
AI security isn’t just about building guardrails to prevent a future I, Robot or Skynet scenario. Many people have debated those possibilities, from Isaac Asimov to Arthur C. Clarke to today’s leading thinkers. That’s not the angle I want to dwell on here.
Instead, after reading a recent article from Think China, I was struck by the sovereignty aspect of AI.
The piece warns that Southeast Asia risks being locked into ecosystems that could undermine the region’s independence. History shows that picking sides rarely leads to lasting sovereignty, and the concerns raised by regional leaders deserve close attention.
AI as a sovereignty issue
In the rush to deploy AI systems, governments are beginning to recognise the risks of concentration. If critical services, from healthcare to logistics to public administration, are built entirely on a few dominant platforms, national resilience becomes fragile. As with land, food, and water security, AI security may soon be a matter of sovereignty.
Some may call this scaremongering: today’s AI providers are focused on growth and customer acquisition, so they would hardly consider restricting services in a competitive environment. Yet the risk remains. If those providers ever switched off their systems, whether willingly or under external pressure, the impact could be devastating: public services grinding to a halt, or supply chains breaking down.
To understand whether such concerns are justified, it’s useful to look at parallels in the global system today. These examples aren’t predictions, but observations that illustrate why dependence on concentrated power is risky.
Lessons from global systems
- The WTO and rule-based order
The World Trade Organisation only works when all players respect its rules. When the U.S. blocked the appointment of judges to the WTO Appellate Body, the dispute-settlement system was effectively paralysed. Some viewed this as a deliberate attempt to bypass rules that no longer suited the leading trading nation. The parallel for AI is clear: global frameworks can fail if dominant players choose not to participate.
- The Trans-Pacific Partnership (TPP)
The U.S. withdrew from the TPP after years of negotiation. The remaining nations signed the CPTPP, but without many of the U.S.-driven provisions. For smaller nations, it showed how quickly alliances can shift, and how reliance on one or two major players can leave others exposed. The same dynamic could emerge if AI platforms consolidate too much power.
- Financial sanctions
Sanctions have become a common tool in global diplomacy. Supporters argue they uphold international law and human rights. Critics counter that they can be instruments of coercion, placing disproportionate pressure on ordinary citizens rather than political leaders. For nations dependent on financial systems controlled by a few blocs, sanctions reveal the limits of sovereignty. The lesson for AI is similar: dependence on external platforms can leave countries vulnerable to outside leverage.
- Frozen assets
The freezing and proposed repurposing of Russian state assets has sparked heated debate. Western governments frame it as lawful enforcement for accountability and reparations, while others see it as a troubling precedent. For sovereign nations, the question is: how secure are your assets if global systems can be reshaped during political disputes? In the AI context, the same question applies to data, algorithms, and cloud access.
- Media and social platforms
TikTok bans highlight how governments are weighing data security against open market access. While officially justified on national security grounds, they also reflect broader anxieties about who controls the digital discourse. Nations are left to weigh both the benefits of open platforms and the risks of relying too heavily on services outside their regulatory reach. The same dilemma will play out even more starkly with AI systems.
- The BRICS response
The expansion of BRICS is part of a wider push for multipolarity. While still evolving, it signals a desire among nations to balance the dominance of existing blocs. For AI, the implication is that countries will seek their own capacity rather than rely wholly on external providers.
Building resilient AI security
Taken together, these examples show why it’s reasonable to question how we build AI systems. Nations need to ask: how can we benefit from the efficiencies and services AI delivers while protecting sovereignty and resilience?
Legislation is important, but so is investment in domestic capabilities: chip production, data centres, research and development, and regulatory frameworks that ensure independence. Guardrails that govern AI reasoning and transparency matter, but without control over the underlying infrastructure and assets, those guardrails could be changed or removed by foreign entities.
In short, AI security is not only about preventing harmful outputs. It is about ensuring that the systems we increasingly depend on serve national interests and remain under sovereign control.
—
The post Artificial Intelligence as a question of national security and independence appeared first on e27.
