From code to carbon: How Asia can harness AI agents without harming people or the planet

Across Asia, a quiet revolution is underway. Banks are piloting AI agents to triage customer queries. Manufacturers are wiring factories with autonomous “co‑pilots” that watch sensor data and adjust production lines in real time. Governments are experimenting with digital assistants to guide citizens through permits and benefits.

These systems look like chatbots on the surface. Under the hood, they are something more consequential: AI agents, software entities that can perceive, plan, and act toward goals with less and less human supervision. They can call other tools, talk to other agents, and make decisions on our behalf.

For Asian policymakers, investors, and executives, the question is no longer whether these systems will arrive. They are already here. The real question is whether the region can scale AI agents in ways that cut emissions, create good jobs, and strengthen social resilience, or whether it will simply import a new layer of risk and dependency.

This article examines the environmental and social implications of AI agents in Asia and what it would take to scale these systems so that they reduce risk, strengthen resilience, and support long-term sustainability rather than simply accelerating automation.

The new machine room: Energy, water, and materials

Most conversations about AI agents focus on productivity. Far fewer acknowledge what it takes to keep them running.

An agentic system does not stop at one reply. It might:

  • Call a language model dozens of times in a single task.
  • Consult search, databases, or corporate systems.
  • Coordinate with other agents, debating and refining answers in the background.

Each of those steps consumes computing cycles, which in turn draw electricity and require cooling.
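
To make that multiplication concrete, here is a stripped-down sketch of an agent loop, written in Python purely for illustration. The function names and the simulated behaviour are placeholders rather than any particular vendor's API; the point is only that one user request can fan out into many separate rounds of inference.

  # Hypothetical agent loop: one request fans out into many model and tool calls.
  # call_model and call_tool are stand-ins for a real LLM endpoint and real tools.
  _state = {"step": 0}

  def call_model(context: str) -> dict:
      # Placeholder model: here it simulates an agent that consults a tool
      # three times before it is satisfied enough to finish.
      _state["step"] += 1
      if _state["step"] < 4:
          return {"action": "search", "args": {"query": context[:60]}}
      return {"action": "finish", "answer": "stubbed summary"}

  def call_tool(name: str, args: dict) -> str:
      # Placeholder for search, a database query, or another agent.
      return f"result of {name} for {args}"

  def run_agent(task: str, max_steps: int = 20) -> tuple[str, int]:
      calls = 0
      context = task
      for _ in range(max_steps):
          decision = call_model(context)
          calls += 1                      # every loop adds another inference call
          if decision["action"] == "finish":
              return decision["answer"], calls
          context += "\n" + call_tool(decision["action"], decision["args"])
      return "gave up", calls

  answer, model_calls = run_agent("Summarise this quarter's sensor anomalies")
  print(f"One task, {model_calls} model calls")

Multiply that pattern by thousands of users, and by agents that also talk to each other, and the footprint of a single "query" starts to look very different.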

Recent sustainability studies of AI infrastructure paint a stark picture. Training frontier models already uses massive energy and water; inference—the day‑to‑day running of models—now represents a growing share of the footprint as usage explodes. Agentic architectures amplify this trend: by moving from individual queries to long workflows, they stretch interactions over time and multiply the number of model calls behind each task.

In Asia, that matters for three reasons.

  • First, much of the region’s power is still fossil‑heavy. Adding large, always‑on AI agent workloads to grids in countries where coal remains dominant risks locking in additional emissions just as climate commitments are tightening.
  • Second, water stress is rising. Many hyperscale data centres rely on water‑based cooling. Locating agent‑heavy workloads in already stressed basins—from northern China to parts of India and Southeast Asia—raises real questions about trade‑offs between digital ambitions and local water security.
  • Third, hardware and materials are not neutral. Manufacturing the chips and networking gear that underpin AI agent platforms carries a global footprint, from mining to fabrication to e‑waste. Asia sits at several points in this chain—as producer, user, and often as the final destination for discarded electronics.

The uncomfortable truth is that energy‑blind and environment‑blind AI agents could quietly erode the very sustainability gains they are supposed to enable.

The social ledger: Work, inequality, and trust

Environmental impacts are only half the story. AI agents also reshape social and economic landscapes.

Work redesigned—from co‑pilot to overseer

Agentic AI changes not only which tasks can be automated, but also how work is organised. Instead of replacing an entire role, agents increasingly:

  • Draft and refine content before a human sees it.
  • Monitor activity streams and flag anomalies.
  • Allocate work among humans based on rules the system learns over time.

In Asian service sectors—business‑process outsourcing, call centres, back‑office operations—this is already visible. Some tasks become easier or faster; others become more fragmented, more closely monitored, and less meaningful.

Research on automation and well‑being suggests that this kind of partial automation can create a peculiar mix of relief and strain. Routine burdens shrink, but so does autonomy. Workers may become supervisors of systems they neither understand nor control, bearing responsibility without agency.

The region faces additional challenges:

  • High shares of informal employment mean many workers who feel AI pressure have little social protection.
  • Power imbalances between multinational clients, local firms, and workers can turn “AI augmentation” into a vehicle for intensified surveillance.

Without deliberate choices, AI agents could widen gaps between high‑skill and low‑skill workers, and between those who design systems and those who are managed by them.

Social intelligence at scale

Most safety discussions have focused on single models hallucinating. Multi‑agent systems introduce a different class of risk. When agents interact, they can:

  • Reinforce one another’s errors in long chains of reasoning.
  • Converge rapidly on misleading narratives.
  • Exhibit emergent “group behaviours” that their designers did not anticipate.

In markets where online information is already polarised or polluted, this matters. If news feeds, moderation systems, and political campaigns lean on swarms of semi‑autonomous agents, errors and biases can propagate faster and further. For countries balancing digital growth with fragile social contracts, that is not a theoretical concern.

Inequality of access and dependency

Finally, there is the question of who gets to own and steer these systems.

At present, the most capable agent platforms are being developed and hosted by a small set of global firms. Asian companies and governments increasingly rely on these platforms for critical functions—from software development and cloud operations to citizen services.

This raises familiar questions:

  • How easy is it to switch providers if terms become unfavourable?
  • Who sets the rules for data use, logging, and model updates?
  • How can public regulators inspect systems that are only partially under their jurisdiction?

At the same time, there is a real opportunity for Asia’s own innovators—especially in countries such as Vietnam, Indonesia, and India—to build lightweight, local‑language agent frameworks tuned to regional needs. Whether that opportunity is seized or squandered will depend on choices taken now about standards, open tooling, and capacity‑building.

Thinking Like the Earth: A different starting point

In our book Thinking Like the Earth: How Synthetic Intelligence Saves Our Planet, we argue that the central question is not whether AI can be “made green” through efficiency tweaks. The deeper question is whether we design AI systems—including agents—to behave as if they were part of living, interdependent systems rather than abstract optimisation engines.

That implies three shifts in mindset:

  • From throughput to sufficiency. Not every task that can be automated should be. The right metric is not “maximum usage” but “enough usage to achieve social and ecological goals.”
  • From isolated tools to ecosystems. AI agents sit within networks of people, institutions, infrastructures, and environments. Governance must take that whole system into account, not just the software.
  • From global templates to local wisdom. Asia’s ecological, cultural, and economic diversity is an asset. AI governance that ignores this richness will fail in practice.

Building practical governance for AI agents

The challenge for Asia is not whether AI agents will be adopted, but whether governance can keep pace with deployment.

This means building practical systems for accountability before large-scale adoption becomes irreversible.

Organisations need clearer standards around environmental reporting, human oversight, decision traceability, and vendor accountability.

Regulators need tools that move beyond abstract principles and into operational questions: where agents are being deployed, how much infrastructure they consume, and how failures are handled when systems make decisions at scale.

Without this, governance remains reactive instead of preventative.
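
What decision traceability could look like in practice is easier to see with an example. The sketch below, again in Python and using an invented schema rather than any existing standard, records each agent action with enough context for a later audit.

  # Hypothetical decision-trace record; the field names are illustrative
  # assumptions, not an existing standard or regulatory format.
  import json, time, uuid
  from dataclasses import dataclass, asdict, field

  @dataclass
  class AgentDecisionRecord:
      agent_id: str               # which deployed agent acted
      task_id: str                # the case or workflow it was acting on
      action: str                 # e.g. "approve_refund" or "escalate_to_human"
      inputs_summary: str         # what the agent saw, summarised for auditors
      model_version: str          # which model and version produced the decision
      human_reviewer: str | None  # None if the action was fully automated
      timestamp: float = field(default_factory=time.time)
      record_id: str = field(default_factory=lambda: str(uuid.uuid4()))

  def log_decision(record: AgentDecisionRecord, path: str = "agent_audit.log") -> None:
      # Append-only log that an internal auditor or regulator could replay.
      with open(path, "a") as f:
          f.write(json.dumps(asdict(record)) + "\n")

  log_decision(AgentDecisionRecord(
      agent_id="claims-triage-01",
      task_id="case-20318",
      action="escalate_to_human",
      inputs_summary="claim above threshold; conflicting documents",
      model_version="internal-llm-2025-06",
      human_reviewer=None,
  ))

Even a simple record like this makes it possible to answer the operational questions above: who acted, on what, with which model, and whether a human was in the loop.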

A regional hub for AI environmental sustainability standardisation

Asia needs a voice in the way global environmental standards for AI are designed. If the region simply imports metrics, labels, and reporting formats devised elsewhere, two things may happen:

  • Local environmental priorities—such as river health, air quality in dense cities, or climate resilience in deltas—will be underweighted.
  • Smaller firms and public agencies may be overwhelmed by compliance demands that were never tailored to their context.

Through the Sustainable AI Portal, we are working with partners to:

  • Pilot energy‑aware and water‑aware metrics for AI workloads, including agentic systems, in real data‑centre and enterprise settings (a rough, illustrative sketch follows after this list).
  • Contribute Asia‑specific perspectives to ongoing discussions on AI sustainability in international standard‑setting bodies.
  • Bring practical insights into multilateral venues, including the UN Global Dialogue on AI Governance, as it begins to grapple with environmental dimensions of AI.
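
As a rough illustration of the first of these points, the sketch below converts the token count of an agentic task into energy, water, and carbon estimates. Every coefficient in it is an assumed placeholder that would have to be measured per model, per facility, and per grid; the structure of the calculation, not the numbers, is what matters at this stage.

  # Back-of-the-envelope footprint for one agentic task.
  # ALL coefficients are assumed placeholders for illustration only;
  # real values must be measured per model, hardware, and data centre.
  ENERGY_KWH_PER_1K_TOKENS = 0.0003  # assumed server-side inference energy
  PUE = 1.4                          # assumed power usage effectiveness
  WUE_LITRES_PER_KWH = 1.8           # assumed water usage effectiveness (cooling)
  GRID_KGCO2_PER_KWH = 0.65          # assumed carbon intensity of a coal-heavy grid

  def task_footprint(total_tokens: int) -> dict:
      server_kwh = (total_tokens / 1000) * ENERGY_KWH_PER_1K_TOKENS
      facility_kwh = server_kwh * PUE          # add cooling and facility overheads
      return {
          "energy_kwh": round(facility_kwh, 4),
          "water_litres": round(facility_kwh * WUE_LITRES_PER_KWH, 4),
          "co2_kg": round(facility_kwh * GRID_KGCO2_PER_KWH, 4),
      }

  # An agentic workflow of 40 model calls averaging 2,000 tokens each:
  print(task_footprint(total_tokens=40 * 2000))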

The goal is not to duplicate what others are doing, but to ensure that Asia’s experiences and experiments shape global norms from the outset.

Open tools and science‑for‑sustainability

Finally, the Portal acts as an open infrastructure for researchers, practitioners, and communities:

  • Shared case studies connect AI‑agent scenarios to SDG priorities such as climate adaptation, sustainable agriculture, and resilient cities.
  • Collaborative “policy labs” bring together engineers, environmental scientists, lawyers, and community representatives to design governance interventions around concrete deployments.

This combination mirrors a broader conviction from Thinking Like the Earth: synthetic intelligence will only help save the planet if it is developed as part of a wider commons—of data, knowledge, and responsibility.

What leaders in Asia can do now

The environmental and social impacts of AI agents are not an argument for paralysis. They are a call for more grounded ambition.

For policymakers:

  • Treat AI agents as infrastructure, not just apps. Require basic environmental and social risk assessments before large‑scale deployments in public services.
  • Support regional governance and research hubs—including in emerging centres such as Ho Chi Minh City, Jakarta, and Bengaluru—that can study impacts locally and feed into global processes such as the AI Dialogue.

For executives:

  • Ask hard questions about the energy, water, and labour implications of agent deployments, not just productivity gains.
  • Build internal capability to instrument and monitor systems, including “kill switches” and clear lines of accountability when agents act unexpectedly (a minimal sketch of such a guardrail follows below).
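
Here is one minimal sketch of what a “kill switch” can mean in software terms, assuming a hypothetical run_agent_step function and internally chosen budgets; a real deployment would hook this into existing monitoring, approval, and incident processes.

  # Hypothetical guardrail wrapper: hard budgets plus a manual kill switch.
  # run_agent_step stands in for whatever executes one agent action.
  class KillSwitchTriggered(Exception):
      pass

  class GuardedAgent:
      def __init__(self, run_agent_step, max_actions=50, max_spend_usd=100.0):
          self._step = run_agent_step
          self.max_actions = max_actions
          self.max_spend_usd = max_spend_usd
          self.actions = 0
          self.spend_usd = 0.0
          self.halted = False          # flipped by an operator or a monitoring alert

      def kill(self, reason: str) -> None:
          # Called by a human operator or an automated monitor.
          self.halted = True
          print(f"Agent halted: {reason}")

      def act(self, task: str, estimated_cost_usd: float = 0.0):
          if self.halted:
              raise KillSwitchTriggered("operator halt")
          if (self.actions >= self.max_actions
                  or self.spend_usd + estimated_cost_usd > self.max_spend_usd):
              self.kill("budget exceeded")
              raise KillSwitchTriggered("budget exceeded")
          self.actions += 1
          self.spend_usd += estimated_cost_usd
          return self._step(task)

  agent = GuardedAgent(run_agent_step=lambda task: f"handled: {task}", max_actions=3)
  print(agent.act("reconcile supplier invoices", estimated_cost_usd=0.5))

The specifics will differ by organisation; the point is that the ability to stop an agent, and to know when it should be stopped, is designed in rather than bolted on.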

For academics and civil society:

  • Work across disciplines—computer science, environmental science, law, social sciences—to build a realistic picture of how agentic AI is reshaping specific sectors and communities.
  • Build cross-border collaboration between researchers, regulators, and industry leaders to develop governance models that reflect real operating conditions rather than imported assumptions.

Asia stands at a fork in the road. It can become a passive consumer of agent technologies designed elsewhere, absorbing their environmental and social costs. Or it can lead by showing how AI agents, governed wisely and designed with the Earth in mind, might actually help the region—and the planet—thrive.

The difference will not be determined by a line of code in Silicon Valley or Shenzhen. It will be shaped in ministries and boardrooms, universities and communities across Asia, by leaders willing to ask a harder question: not “How fast can we deploy agents?” but “What kind of future do we want them to build with us?”

Image Credit: Noah Buscher on Unsplash
