
We are used to identity being a human problem. A person signs in, gets assigned roles, and systems enforce access based on policy. Even when we talk about “non-human identities,” the mental model still tends to be infrastructure: service accounts, API keys, workload identities.
Agent-to-agent interaction breaks that model.
In the emerging architecture of AI-integrated platforms, agents will not only assist with one product. They will interact with external agents, negotiate APIs, coordinate tasks across tools, and execute actions that span organisations.
This is barely discussed today, which is exactly why it deserves attention.
Why is this different from traditional integration?
Cross-platform integrations are not new. What changes is the nature of decision-making.
Classic integrations are deterministic. A webhook fires. An API is called. A workflow runs. The system does what it was programmed to do.
Agents introduce delegation and interpretation. They decide what to call, when to call it, and how to combine outcomes. They reason over ambiguous inputs and incomplete context. They also learn patterns from interactions over time. That means “correct behaviour” is not just a matter of validating a token. It becomes a matter of validating intent, scope, and safety in motion.
When an external agent calls your agent, you are not just receiving a request. You are accepting an upstream decision.
The core identity question: Who is the actor?
With humans, the actor is clear. With service accounts, the actor is a system you control. With agents, the actor becomes layered.
Is the actor the user who initiated the request? The agent who interpreted the request? The platform that hosts the agent? The organisation that deployed it? Or the chain of agents that influenced the final action?
In real systems, it will often be all of the above. Without a shared way to represent that chain, we will end up with brittle trust based on convenience: “This request came from a reputable provider, so it must be fine.”
That is not a security model. It is a hope model.
We need delegation integrity
Authentication tells you who is calling. It does not tell you whether the caller has the right to ask for what they are asking.
Agent-to-agent systems will need to prove not just identity, but delegation. The receiving system should be able to answer:
- Who delegated this action?
- What was the approved scope?
- What constraints were in place?
- What context was used to make the decision?
- How recent is the authorisation, and can it be revoked?
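A minimal sketch of what a receiving system might check is below. The `DelegationProof` record and `verify_delegation` function are hypothetical names invented for illustration, not an existing standard; a real system would use signed tokens (for example, OAuth token exchange or verifiable credentials) rather than a plain dataclass.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class DelegationProof:
    """Hypothetical record an external agent attaches to a request."""
    delegator: str              # who delegated this action (user or org)
    agent: str                  # the agent acting on their behalf
    scope: frozenset            # approved actions, e.g. {"files:read"}
    issued_at: datetime         # when the authorisation was granted
    ttl: timedelta              # how recent the authorisation must be
    proof_id: str               # handle so the grant can be revoked

REVOKED: set[str] = set()       # stand-in for a real revocation list

def verify_delegation(proof: DelegationProof, requested_action: str) -> bool:
    """Answer the receiving system's questions: revocation, freshness, scope."""
    now = datetime.now(timezone.utc)
    if proof.proof_id in REVOKED:
        return False                          # authorisation was withdrawn
    if now - proof.issued_at > proof.ttl:
        return False                          # authorisation is too old
    return requested_action in proof.scope    # was this within approved scope?
```

Note that the default answer is "no": an action outside the delegated scope fails even if the caller's identity checks out.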
Today, most inter-org trust collapses into static secrets, broad OAuth scopes, and contractual assumptions. Those mechanisms were designed for services, not for autonomous decision engines.
Authorisation becomes dynamic and contextual
In a multi-agent world, authorisation cannot be a single static gate. It has to be context-sensitive and risk-aware.
If an external agent is asking to pull a file, the risk depends on the file type, its sensitivity, the destination, the current anomaly signals, and the actor chain. If an external agent is asking to trigger a workflow, the risk depends on blast radius, downstream integrations, and reversibility.
This forces a new discipline: designing “agent actions” as a controlled interface, rather than letting agents operate through broad administrative permissions. If your agent can do anything your user can do, you have effectively created a second user with fewer human constraints.
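The file-pull example above can be sketched as a simple risk gate. The signal names and weights here are assumptions chosen for illustration; a production system would calibrate them against real incident data.

```python
def action_risk(sensitivity: int, external_destination: bool,
                anomaly_score: float, chain_depth: int) -> float:
    """Combine context signals into a rough 0..1 risk estimate for one action."""
    risk = 0.2 * sensitivity / 3          # data sensitivity on a 0-3 scale
    risk += 0.3 if external_destination else 0.0
    risk += 0.3 * min(anomaly_score, 1.0) # current anomaly signals
    risk += 0.05 * min(chain_depth, 4)    # longer actor chains add uncertainty
    return min(risk, 1.0)

def decide(risk: float) -> str:
    """Not a single static gate: the outcome depends on the computed risk."""
    if risk < 0.3:
        return "allow"
    if risk < 0.7:
        return "require_confirmation"     # deliberate friction for grey areas
    return "deny"
```

The point is not the specific formula but the shape: identity is one input among several, and the same action can be allowed, escalated, or denied depending on context.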
The trust boundary will shift from “app” to “action”
The safest mental model is that identity moves from being account-centric to action-centric.
Instead of granting an agent broad access to a system, you grant it the ability to perform specific actions under specific constraints. Each action has a policy. Each action is logged with intent and provenance. Each action can be throttled, sandboxed, or reversed.
This is already how high-trust systems are built. The difference is that it will need to become mainstream, because agents will otherwise accumulate privilege faster than governance can keep up.
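One way to make actions the unit of trust is an explicit action registry: agents can only invoke what has been registered, and every action carries its own policy. The names below (`ActionPolicy`, `register_action`, `invoke`) are illustrative, not a real framework.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class ActionPolicy:
    reversible: bool              # informs how much autonomy is safe
    max_calls_per_hour: int       # throttling is defined per action
    requires_confirmation: bool   # some actions always need a human

# Hypothetical registry: agents get specific actions, never broad access.
REGISTRY: dict[str, tuple[Callable, ActionPolicy]] = {}

def register_action(name: str, policy: ActionPolicy):
    def wrap(fn: Callable) -> Callable:
        REGISTRY[name] = (fn, policy)
        return fn
    return wrap

@register_action("ticket.create", ActionPolicy(reversible=True,
                                               max_calls_per_hour=60,
                                               requires_confirmation=False))
def create_ticket(title: str) -> str:
    return f"TICKET: {title}"

def invoke(name: str, *args):
    """Agents act only through this gate, never through the raw API."""
    fn, policy = REGISTRY[name]   # unregistered action -> KeyError: deny by default
    if policy.requires_confirmation:
        raise PermissionError(f"{name} needs human confirmation")
    return fn(*args)
```

Because every call goes through `invoke`, logging, throttling, and sandboxing have a single place to live, instead of being retrofitted per integration.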
Decision cascades in multi-agent systems
Agent-to-agent trust is only half the challenge. The other half is what happens when agents form chains.
In future systems, agents will call other agents, which in turn trigger downstream automations.
The failure mode here is not “one wrong answer.” It is “one wrong answer that becomes an input signal for ten other systems.”
Cascades are not hypothetical
Organisations already have cascading automation. A monitoring alert triggers a ticket, which triggers an on-call action, which triggers a deployment rollback. The difference is that these chains are built from deterministic rules.
Agents make the chain probabilistic.
If an agent misclassifies an event, it may call the wrong downstream tool. If it overconfidently infers intent, it may trigger a workflow that was never meant to run. If it misreads context, it can propagate that error through multiple dependent actions.
The scary part is that each step in the chain can look locally reasonable. The system “followed the process.” The process was simply driven by a flawed inference.
Why we lack containment models
Traditional containment models assume discrete incidents: isolate the host, rotate credentials, block the IP, patch the vulnerability.
Cascades do not behave like that. They are distributed and asynchronous. They cross product boundaries. They may involve third-party agents. By the time you notice something is wrong, the downstream effects have already occurred in multiple systems.
This is why we need cascade containment models. Not as an abstract research area, but as an engineering requirement for systems that allow agents to trigger actions.
Principles for cascade containment
A mature cascade model starts with acknowledging that not every agent output should be executable.
Some practical principles are worth adopting early.
- Bounded autonomy: Agents should have clear limits on what they can execute without confirmation. Those limits should tighten as the blast radius grows.
- Progressive trust: An agent earns autonomy through reliable behaviour and predictable outcomes over time, not through initial configuration. New agents, new integrations, and new workflows should start constrained.
- Circuit breakers: If an agent triggers unusual rates of actions, unusual cross-system combinations, or repeated failures, automation should pause. This is deliberate friction that appears when the system deviates from normal.
- Risk scoring at the edge: Each action request should be evaluated not only by identity, but by context and potential impact. High-impact actions should require stronger proof and stricter gating.
- Explicit rollback paths: Actions that are hard to reverse should require higher certainty. If rollback is easy, you can allow more autonomy.
- Provenance and traceability: Every agent decision that leads to an action should carry a trace of what triggered it, what context was used, what downstream calls were made, and what constraints were applied. Without traceability, containment becomes impossible.
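The circuit-breaker principle can be sketched concretely. This is a minimal rate-based breaker, assuming a single agent and a fixed threshold; real deployments would also watch cross-system combinations and failure rates, as the list above notes.

```python
import time
from collections import deque

class CircuitBreaker:
    """Pause automation when an agent's action rate deviates from normal."""

    def __init__(self, max_actions: int, window_seconds: float):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps: deque[float] = deque()
        self.open = False              # open breaker = automation paused

    def allow(self, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop actions that fell outside the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if self.open:
            return False               # stays paused until a human resets it
        if len(self.timestamps) >= self.max_actions:
            self.open = True           # deliberate friction: trip and hold
            return False
        self.timestamps.append(now)
        return True
```

The design choice that matters is that the breaker does not self-heal: once tripped, it stays open until reviewed, because a cascade that resumes on its own defeats the purpose of containment.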
Users will demand autonomy, then blame it
As agents become more capable, the pressure to let them act will grow. Users will want “just handle it” experiences. And when something goes wrong, the same users will be surprised that the system acted without asking in a nuanced case.
This is why guardrails cannot be an afterthought. They have to be part of the product contract. The system should clearly communicate what it can do autonomously, what it will ask before doing, and how it will behave under uncertainty.
The goal is not to reduce automation. The goal is to make autonomy safe.
—
The post Agent-to-agent trust: The next identity challenge appeared first on e27.
