
Every technology boom produces its own version of unauthorised adoption. Cloud had it. SaaS had it. Messaging apps had it. Now, AI agents are doing it at machine speed.
That is one of the most explosive threads running through the US-based API management company Gravitee’s “AI Agent Governance Gap” report. It argues that the real AI security problem is no longer hypothetical misuse but ungoverned deployment already underway within the enterprise.
The report says 75 per cent of organisations have discovered unsanctioned AI tools already running in their environments. Gravitee’s own survey data adds another damning metric: only 14.4 per cent of organisations have achieved full IT and security approval for their entire agent fleet. If shadow IT used to creep in through departmental software subscriptions, shadow AI is charging in through copilots, browser tools, API wrappers, open-source models and workflow automations that can be spun up in days.
Southeast Asia is especially exposed because its digital businesses run on speed, improvisation, and distributed decision-making. That is not a criticism. It is part of why the region produces agile startups, resilient consumer platforms, and scrappy enterprise teams. But the same traits that drive innovation also make it easy for AI tools to bypass official channels. A product lead in Jakarta, a growth team in Manila, or a developer unit in Ho Chi Minh City does not need a six-month procurement cycle to start using AI. They need a company card, an API key, and a reason.
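To make that concrete, here is a minimal sketch, in Python against a hypothetical LLM provider (the endpoint, model name, and response shape are placeholders, not any real provider's API), of how little code stands between a team and a working AI integration:

```python
# Hypothetical illustration: wiring a lead-summarisation "agent" into a
# workflow with nothing but an API key. Endpoint, model name, and response
# shape are placeholders, not a real provider's API.
import os
import requests

API_KEY = os.environ["LLM_API_KEY"]  # expensed to a company card

def summarise_lead(notes: str) -> str:
    """Send raw CRM notes to an external model and return a summary."""
    resp = requests.post(
        "https://api.example-llm.com/v1/chat",  # placeholder endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "example-model",
            "messages": [{"role": "user",
                          "content": f"Summarise this sales lead:\n{notes}"}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

No procurement ticket, no security review, no architecture board. A dozen lines, and customer data is flowing to an external service.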
Friction is the mother of shadow adoption
One of the most useful insights in the report is brutally simple: shadow AI is a rational response to organisational friction. The white paper quotes journalist Jane Wakefield, who says, “Business leaders want to move quickly with AI. However, with different tools, different models, and different rules, it can be hard to have a clear picture of where data is going or how decisions are being made.”
That line lands because it describes a very familiar corporate pattern. Approved tools are slow to procure. Security review takes time. Legal wants data clauses. Compliance wants records. The business unit wants results this quarter. So the team finds a faster route.
This is not usually sabotage. It is incentive design. Employees are judged on output, speed, and innovation. If the approved path to AI is painful, the unapproved path becomes attractive.
In Southeast Asia, that logic is amplified by competitive pressure. Startups are trying to conserve headcount while increasing output. Large enterprises are under pressure to automate customer support, sales operations, fraud detection, procurement and internal knowledge work.
Regional conglomerates are pushing digital transformation into subsidiaries with very different levels of technical maturity. In all of those environments, an AI tool that promises faster decisions or lower labour intensity can spread before governance catches up.
The real risk is not the chatbot. It is the connection
The public conversation around shadow AI often gets stuck on employees pasting sensitive text into consumer chatbots. That is a problem, but it is no longer the whole problem. The bigger enterprise risk emerges when unsanctioned AI tools are connected to internal systems.
An AI assistant with read access to a Slack workspace is one thing. An AI agent with delegated access to a CRM, document repository, billing dashboard, or cloud admin console is something else entirely. Once those connections exist, shadow AI stops being a data leakage issue and starts becoming an operational control issue.
The report warns that these tools can arrive with embedded credentials or elevated system access that security teams do not even know exists. That observation should resonate across Southeast Asia, where many companies depend on external agencies, implementation partners and loosely documented integrations. In fast-moving businesses, access is often granted to “just get it working”. Later, nobody is entirely sure which tool is calling what.
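What "embedded credentials" look like in practice is often mundane. Here is a hypothetical sketch of the anti-pattern, with the token, scopes, and internal hostname invented purely for illustration:

```python
# Hypothetical sketch of the anti-pattern the report describes: an AI tool
# arriving with a hard-coded service token and broader access than anyone
# approved. Token, scopes, and hostname are placeholders.
import requests

SERVICE_TOKEN = "svc-agent-a1b2c3"  # baked in by a vendor or contractor
CRM_SCOPES = ["contacts:read", "deals:write", "billing:read"]  # far wider than the task needs

def call_crm(path: str, payload: dict) -> dict:
    """Every request authenticates as the same over-privileged service account."""
    resp = requests.post(
        f"https://crm.internal.example.com/api/{path}",  # placeholder internal host
        headers={"Authorization": f"Bearer {SERVICE_TOKEN}"},
        json=payload,
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json()
```

Because the token lives in the tool rather than in a managed secrets store, rotating it, or even learning that it exists, first requires finding the code.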
That creates a dangerous asymmetry. Business teams see productivity gains immediately. Security teams see the underlying exposure only after an incident, an audit finding or a suspicious log pattern. By then, the tool may already be part of a critical workflow.
The region’s startup culture makes this even harder to police
For a pan-Asia tech audience, the uncomfortable truth is that startup culture itself can nurture shadow AI. Founders prize initiative. Engineers are rewarded for solving problems without bureaucracy. Growth teams experiment first and document later. That is often a strength. It is also how invisible dependencies get created.
Imagine a sales team using an AI agent to summarise leads, enrich account data and draft outreach. Then it gets connected to HubSpot or Salesforce. Then it gains access to internal pricing sheets. The customer success team then follows the same workflow. Six months later, the company has an undeclared AI layer sitting between staff and core customer systems.
Nothing about that progression sounds dramatic while it is happening. That is precisely why it is dangerous.
The problem is even more acute in Southeast Asia because many companies are managing multilingual operations, fragmented vendor stacks, and regional expansion simultaneously. A single shadow AI deployment can touch data subject to Singapore’s PDPA, Indonesia’s personal data law, Vietnam’s privacy rules or sector-specific controls in financial services. The compliance exposure is no longer local. It is distributed.
Security teams are losing the race to discover what exists
Gravitee’s broader research found that 88 per cent of organisations reported confirmed or suspected security incidents this year related to agent security. Read alongside the 75 per cent shadow AI figure, the message is blunt: enterprises are not merely struggling to secure authorised AI. They are struggling to discover unauthorised AI before it matters.
This is why “approval gap” may become one of the most important phrases in enterprise AI. Many governance discussions focus on policy design. But before policies can be enforced, organisations have to know which agents, tools and workflows are already active. That sounds basic. It is not.
Discovery is hard because AI adoption is now decentralised. Teams can access public models directly, use embedded AI features in SaaS products, deploy open-source models on cloud infrastructure or build wrappers around multiple providers. Some tools look like standalone apps. Others are merely features hiding inside software the company already uses. The sprawl is astonishingly easy to underestimate.
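Discovery therefore tends to start with the network rather than with an inventory. One common first pass, sketched below under the assumption that proxy or egress logs are available as CSV with source and destination columns, is to flag outbound traffic to known AI provider hostnames. The log format and hostname list here are illustrative, not exhaustive:

```python
# Illustrative sketch: flag outbound requests to known AI providers in a
# proxy log. The log format (CSV with src_ip and dest_host columns) and the
# hostname list are assumptions for the example.
import csv
from collections import Counter

AI_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    # ...extend with embedded SaaS AI endpoints as they are identified
}

def shadow_ai_hits(log_path: str) -> Counter:
    """Count requests per (source, destination) pair hitting AI endpoints."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # assumes columns: src_ip, dest_host
            if row["dest_host"] in AI_HOSTS:
                hits[(row["src_ip"], row["dest_host"])] += 1
    return hits

if __name__ == "__main__":
    for (src, host), count in shadow_ai_hits("proxy.log").most_common(20):
        print(f"{src} -> {host}: {count} requests")
```

It is crude, and it misses AI features embedded inside sanctioned SaaS entirely, but in practice a pass like this usually surfaces more than the official register does.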
The cost of being slow is now higher than the cost of being wrong
There is a strategic twist here that many leaders have not internalised. In the past, central technology teams could often slow adoption in the name of control. In AI, that strategy backfires. If the secure path is significantly slower than the insecure path, business units will route around it.
That means the winning governance model is not simply stricter. It has to be faster, clearer and easier to use than shadow alternatives. This is particularly relevant in Southeast Asia, where businesses operate in highly competitive markets with thin margins and relentless pressure to move. Governance that adds friction without adding usable infrastructure will be ignored.
The lesson from the report is not that organisations should crack down theatrically on every unauthorised tool, but that they need to make compliant AI access genuinely convenient. If official channels are slow, shadow AI will keep winning.
The next era of enterprise AI security will not be defined by who writes the toughest policy. It will be defined by who builds the fastest trustworthy route from business need to approved deployment. In a region that values execution, that may be the only governance model with any chance of survival.
