AI Agents · April 17, 2026 · 4 min read · By Forum Desk

AI Agents Are the New Identity Problem — And Nobody's PAM Is Ready

As autonomous agents start spending money, submitting code reviews and deploying infrastructure, the industry is scrambling to extend machine-identity controls to non-deterministic actors.

  • #ai-agents
  • #identity
  • #nhi
Dark control-room screens illustrating autonomous agent security

Autonomous agents built on large language models have quietly moved from demo-ware to production in the last six quarters. What started as retrieval-augmented copilots is now a generation of systems that hold long-term memory, make tool calls, and act on behalf of a human — often with the human’s full delegated authority. A security question has emerged faster than any vendor category can answer it: what identity does an agent present, and who is accountable for what it does?

Why classic PAM stops at the front door

Traditional privileged access management assumes a deterministic workload. A service account authenticates, calls an API, the call either succeeds or fails, and every decision is auditable. Agents break that assumption on three axes. They are non-deterministic: the same prompt can produce different tool-call sequences on consecutive runs. They are recursive: an agent can spawn sub-agents that inherit scoped credentials but operate on different goals. And they are long-lived: a research agent running for twelve hours traverses enough context windows that attribution to a single human operator becomes statistically meaningless by hour six.
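The recursion axis is the one conventional scoping handles worst: a sub-agent should never hold broader authority than the parent that spawned it. A minimal sketch of scope attenuation on spawn, with entirely hypothetical names rather than any vendor's API, might look like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentCredential:
    """A hypothetical per-run credential: a run ID plus a set of scopes."""
    run_id: str
    scopes: frozenset

    def attenuate(self, requested: set) -> "AgentCredential":
        """Issue a sub-agent credential whose scopes can only narrow, never widen."""
        granted = self.scopes & frozenset(requested)
        return AgentCredential(run_id=f"{self.run_id}/sub", scopes=granted)

parent = AgentCredential("run-42", frozenset({"repo:read", "repo:write", "deploy"}))
# The sub-agent asks for billing:write; the intersection silently drops it.
child = parent.attenuate({"repo:read", "billing:write"})
```

The invariant worth noting is the set intersection: whatever goal the sub-agent pursues, it cannot escalate beyond what the parent already held.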

Three early patterns

Practitioners we spoke to across three continents are converging on three controls:

  1. Short-lived, narrowly scoped credentials issued per task — OIDC tokens that expire within minutes, scoped to a specific resource and a specific agent run ID.
  2. Tool-use policy engines that sit between the agent and external APIs, evaluating each call against a declarative allow-list the human operator approved up-front.
  3. Replayable session logs — not just a text transcript, but a structured record of every tool call with its exact arguments, which vendors like Anthropic and a growing open-source group are standardising under the name agent provenance.
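The second and third controls compose naturally: a declarative allow-list evaluated before each tool call, with every decision appended to a structured, replayable record. The sketch below is illustrative only; the tool names and policy shape are assumptions, not the emerging agent-provenance schema:

```python
import time

# Hypothetical allow-list the human operator approved up-front:
# tool name -> constrained arguments and their permitted values.
ALLOW_LIST = {
    "github.create_pr": {"repo": {"acme/widgets"}},
    "aws.describe_instances": {},  # listed, with no argument constraints
}

session_log = []  # structured record of every tool call, not just a transcript

def evaluate(tool: str, args: dict) -> bool:
    """Allow a call only if the tool is listed and all constrained args match."""
    policy = ALLOW_LIST.get(tool)
    allowed = policy is not None and all(
        args.get(key) in permitted for key, permitted in policy.items()
    )
    session_log.append(
        {"ts": time.time(), "tool": tool, "args": args, "allowed": allowed}
    )
    return allowed

evaluate("github.create_pr", {"repo": "acme/widgets", "title": "fix"})  # allowed
evaluate("shell.exec", {"cmd": "rm -rf /"})  # denied: tool not on the list
```

Because the log captures exact arguments alongside each verdict, a denied call is as replayable as an allowed one, which is precisely what the provenance pattern is after.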

What this means for the SOC

SIEMs are already seeing agent-originated traffic patterns that look nothing like a human analyst and nothing like a traditional bot. Expect a mid-year push for agent-aware detection rules and a renewed conversation about whether the SOC runbook still applies when the attacker, or the legitimate user, isn’t a person.