AI agents are already inside your systems, but who’s controlling them?

The enterprise AI story has moved well beyond chatbots and novelty pilots. In large companies, AI agents are now being connected to finance systems, customer databases, internal knowledge bases, payment rails, cloud consoles and software development pipelines. That shift is why “The AI Agent Governance Gap” report by US-based API management company Gravitee lands with the force of a fire alarm, not a polite policy memo.

The report cites findings from Cybersecurity Insiders showing that 71 per cent of large enterprises have already deployed AI agents with direct access to core business systems, yet only 16 per cent effectively govern that access. In other words, the corporate world has handed the keys to the machine before installing the locks.

Also Read: AI agents could become the new OTAs: What it means for Agoda and the future of travel

That framing matters enormously in Southeast Asia, where enterprises are modernising fast but often unevenly. The region’s banks, telcos, insurers, logistics giants, government-linked companies and fast-scaling tech firms are running a dense mix of legacy systems, cloud services, outsourced IT operations, and regional data flows. Add AI agents into that patchwork, and the attack surface does not merely expand. It becomes harder to even describe.

The problem is not adoption. It is architecture

Gravitee’s core argument is that the governance gap is architectural, not procedural. AI agents do not behave like human employees, and they do not fit neatly into identity and access models designed for human beings signing in from laptops. Agents operate at machine speed, can chain actions across multiple systems, inherit permissions quietly and create activity logs that are difficult for security teams to interpret in real time.

The numbers in the report are stark. It says 92 per cent of organisations lack full visibility into their AI identities, while 95 per cent doubt they could detect or contain misuse if it occurred. Nearly half of surveyed CISOs (47 per cent) say they have already seen AI agents exhibit unintended or unauthorised behaviour. That is not a theoretical risk. That is production risk wearing a name badge.

For Southeast Asia, the implications are especially sharp because many businesses operate across multiple jurisdictions with different compliance expectations. A Singapore-headquartered company may have engineering in Vietnam, a customer service operation in the Philippines, merchant relationships in Indonesia and cloud workloads spread across several regions. One poorly scoped AI agent plugged into a CRM, data warehouse, and payment workflow can turn into a compliance and security headache across borders in a matter of seconds.

Regional digitisation has created fertile ground for agent sprawl

There is a reason the region is vulnerable to this problem. Southeast Asia’s digital economy has been built on speed, interoperability and relentless integration. Super apps connect payments, food delivery, transport and lending. E-commerce platforms rely on real-time logistics and fraud tools. Banks are exposing more services through APIs.

Manufacturers are digitising procurement, forecasting and maintenance. Every one of those changes creates more structured workflows for an AI agent to enter.

And once agents arrive, they rarely stay in one lane. A sales operations agent may begin by summarising pipeline data, then request permission to update records, trigger marketing actions, and request access to billing information to answer customer queries. Over time, what began as a productivity tool becomes a semi-autonomous operator within the business.

This is where the report’s warning becomes uncomfortable. Most organisations still govern access as if the main risk is a human clicking the wrong button. But the bigger danger increasingly comes from a non-human identity making a thousand correct calls, in the wrong sequence, at the wrong scale, with the wrong level of access.

Also Read: AI agents are outpacing security: The crisis hiding in plain sight

That problem is not abstract in Southeast Asia. Regional companies often rely on managed service providers, third-party integrators and offshore development teams to stitch systems together. Credentials are shared. Service accounts linger. Documentation ages badly. In that environment, AI agents do not arrive in a pristine architecture. They arrive in a house whose wiring is already creative.

Why visibility is collapsing

The report argues that the first casualty of agentic AI is visibility. Traditional dashboards can tell security teams that an API was called or a database was queried. They are far less effective at expressing why an agent took a particular action, what chain of prompts or tool calls produced it, and whether the access was proportionate to the task.

That matters because AI agents do not simply authenticate once and sit still. They discover tools, call APIs, retrieve documents, invoke external models and sometimes delegate subtasks to other services. Each of those steps creates a miniature trust decision. According to the report, most enterprises are not instrumented to observe that flow in any coherent way.
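Instrumenting that flow does not require exotic tooling; the gap is that few teams do it at all. As a minimal sketch (the agent and tool names here are hypothetical, and a real deployment would ship these records to a SIEM rather than a local logger), every tool call an agent makes can be wrapped so that the agent's identity, the tool invoked and the arguments passed are recorded before the call executes:

```python
import json
import logging
import time
from typing import Any, Callable

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("agent_audit")

def audited(agent_id: str, tool_name: str, fn: Callable[..., Any]) -> Callable[..., Any]:
    """Wrap a tool so every invocation is recorded before it runs."""
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        record = {
            "ts": time.time(),
            "agent": agent_id,     # which non-human identity acted
            "tool": tool_name,     # which capability it exercised
            "args": args,
            "kwargs": kwargs,
        }
        audit_log.info(json.dumps(record, default=str))
        return fn(*args, **kwargs)
    return wrapper

# Hypothetical tool: look up a customer record.
def lookup_customer(customer_id: str) -> dict:
    return {"id": customer_id, "tier": "gold"}

lookup = audited("sales-ops-agent", "lookup_customer", lookup_customer)
result = lookup("C-1042")
```

The point of the sketch is not the logging itself but the shape of the record: each entry ties an action to a specific agent identity, which is exactly the linkage the report says most enterprises cannot reconstruct today.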

In Southeast Asia, this visibility gap intersects with another reality: many organisations are using AI to compensate for talent shortages. Teams want automation because they are under pressure to do more with fewer specialists. That business case is real. But it also increases the temptation to grant broad permissions quickly, especially when the alternative is slower manual work.

The result is a pattern security teams know all too well: access first, governance later. Except that "later" only arrives once the workflow is live, the vendor is embedded and the business unit is already dependent on the outcome.

The hidden boardroom risk

There is also a strategic issue here that founders and boards should not ignore. Many executives still view AI risk through the lens of model accuracy, bias or data leakage. Those issues matter, but agent governance is different. It is an operational power risk. It is the risk that software can now do things in enterprise systems, not merely analyse or recommend.

That shifts the conversation from ethics decks to control planes. If an agent can touch ERP, procurement, payroll, code repositories or customer records, then the real question is no longer whether the model is clever. The real question is whether the organisation knows what the agent is allowed to do, when, under what policy and with what audit trail.
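In code terms, that question is answerable only if authorization is explicit and deny-by-default. A minimal sketch, with entirely illustrative agent and permission names, shows the shape of such a control plane: each agent identity maps to a scoped set of actions, and anything unlisted is refused.

```python
# Illustrative allowlist policy: each agent identity maps to the
# actions it may perform. All names here are hypothetical examples.
POLICY = {
    "sales-ops-agent": {"crm:read", "crm:update"},
    "support-agent": {"crm:read", "billing:read"},
}

class PolicyViolation(Exception):
    """Raised when an agent attempts an action outside its grant."""

def authorize(agent_id: str, action: str) -> None:
    """Deny by default: unknown agents and unlisted actions are refused."""
    allowed = POLICY.get(agent_id, set())
    if action not in allowed:
        raise PolicyViolation(f"{agent_id} is not permitted to perform {action}")

authorize("sales-ops-agent", "crm:update")  # within scope, passes silently
try:
    authorize("sales-ops-agent", "billing:write")  # out of scope
except PolicyViolation as exc:
    denied = str(exc)
```

The design choice that matters is the default: an agent absent from the policy, or requesting an action outside its grant, is blocked rather than waved through. Revoking access then becomes a one-line policy change instead of an archaeology project across service accounts.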

For Southeast Asian enterprises racing to prove they are AI-ready, this is where the story gets serious. The most immediate threat may not be a headline-grabbing model failure. It may be a quiet overreach: an agent with too much access, too little monitoring and too many connected systems.

The coming divide

The Gravitee report points towards a coming divide in enterprise AI. On one side will be organisations that treat agents as first-class operational actors requiring identity, authorisation, monitoring and lifecycle management. On the other will be those who continue to treat agents as convenient add-ons to existing software.
The first group will move more slowly at the beginning and much faster later. The second group will look agile until something breaks.

Also Read: Agentic AI is powerful, but power isn’t product-market fit

In Southeast Asia, where growth markets often reward speed and execution, that distinction could become a competitive fault line. The winners will not simply be the companies with the most AI agents. They will be the ones who know exactly what those agents are doing, what they can touch and how quickly their access can be changed or revoked.

The age of AI agents in the enterprise has already begun. The age of controlling them has barely started. That, as the report makes clear, is the real story.

The post AI agents are already inside your systems, but who’s controlling them? appeared first on e27.
