Posted on

The AI-energy paradox: Will AI spark a green energy revolution or deepen the global energy crisis? — Part 1

Artificial intelligence (AI) is expanding at breakneck speed, presenting a paradox for global energy systems. On one hand, AI-driven innovations promise efficiency gains in renewable energy management and smarter grids. On the other, the surging power demands of AI threaten to strain electricity infrastructure and increase reliance on fossil fuels.

Current projections indicate data centres — the digital fortresses powering AI — could consume over 1,000 TWh of electricity by 2026, roughly double their 2022 usage. (For perspective, that’s comparable to Japan’s annual power consumption, or about 90 million US homes.)

In the European Union alone, data centre energy use is forecast to reach 150 TWh by 2026, around four per cent of EU electricity demand. Gartner even predicts that 40 per cent of existing AI data centres will hit power capacity limits by 2027, underscoring the urgent infrastructure challenge.

This surge places immense pressure on power grids. Cutting-edge AI models require enormous energy: training a single large language model (LLM) like OpenAI's GPT series can devour tens of gigawatt-hours of electricity. Some hyper-scale AI data centres already draw 30-100 megawatts each, and future facilities may exceed 1,000 MW (1 gigawatt) — about the output of a large power plant.

One industry analysis notes tech giants are pursuing “gigawatt-scale” data centre campuses to support AI workloads. By 2030, Microsoft and OpenAI’s planned “Stargate” supercomputer could require an astonishing five GW of power.

In response, tech companies are exploring diverse energy strategies. Google, for instance, is investing in advanced nuclear power: it signed a deal to purchase energy from small modular reactors (SMRs), aiming to add 500 MW of carbon-free power by 2030.

While Microsoft is turning to nuclear with its Three Mile Island deal, Amazon and Meta are turning to conventional power plants — in some regions, new natural gas-fired generators — to guarantee reliable power for AI data centres, a strategy supported by utilities. In Wisconsin, regulators approved a US$2 billion gas plant deemed “critical” for Microsoft’s new AI hub.

These moves underline a hard truth: renewables alone can’t yet meet AI’s ravenous base-load demand, prompting a dual-track energy race between carbon-free solutions and fossil fuels.

This brings up pressing questions for business leaders:

  • Will AI ultimately drive sustainability gains or an energy crisis?
  • How are regional disparities and geopolitics shaping AI’s energy footprint?
  • What technological breakthroughs could enable sustainable AI growth?
  • And how should corporate strategy adjust to balance AI’s benefits against its energy and carbon costs?

This three-part guide examines the forces at play — from data centre trends and energy innovations to policy and geopolitical factors — to help corporate decision-makers navigate AI’s energy revolution.

The goal: understand the macro and geopolitical impacts of AI’s energy consumption, and chart a course that leverages AI’s power responsibly and sustainably.

The energy cost of AI: Hard truths and hidden opportunities

Global data centre electricity consumption reached an estimated 460 TWh in 2022, with AI and cryptocurrency operations accounting for roughly 14 per cent of that load, according to the International Energy Agency (IEA).

Now AI is pushing those numbers dramatically higher. Projections show data centres worldwide could consume over 1,000 TWh by 2026 — roughly doubling in just four years. By 2030, some forecasts see a further 160 per cent increase in data centre power demand driven by AI.


This growth is concentrated in key AI hubs and “cloud clusters” with serious consequences for local grids:

  • In Northern Virginia’s famed “Data Centre Alley,” the massive concentration of servers has led to power quality issues. The region now experiences voltage distortions four times higher than the US average, raising the risk of appliance damage and even fires for surrounding communities. Utilities warn that traditional grid infrastructure is straining to keep up with the load.
  • In central Ohio, data centre capacity has quadrupled since 2023, consuming so much electricity that utility AEP had to halt new data centre connections, despite a 30 GW queue of projects waiting to plug in. Simply put, the grid can’t be expanded fast enough to accommodate the sudden surge in demand.
  • Ireland faces a similar crunch — by 2026, data centres are projected to gobble up 32 per cent of Ireland’s electricity. Dublin’s metro grid is so stressed that the government imposed a moratorium on new data centres in the area, shifting over US$4 billion in planned investments to other countries.

The energy intensity of AI is a key reason demand is outpacing capacity. A few eye-opening facts illustrate the scale:

  • Training a single large AI model can consume enormous amounts of electricity. For example, training ChatGPT/GPT-3 (with 175 billion parameters) is estimated to use on the order of 1-1.3 GWh (gigawatt-hours) of energy — roughly the yearly electricity usage of more than 100 US homes. And that’s for one training run. Newer models like GPT-4 are even more power-hungry: estimates suggest on the order of 50-60 GWh for a full training cycle, enough to power ~4,500 homes for a year (while emitting tens of thousands of tons of CO₂). In other words, one large AI model’s training equals years of household electricity.
  • Running AI models (inference) is also energy intensive. AI queries consume about 10× more electricity than a typical Google search. Every time you ask ChatGPT a question, a network of GPUs fires up, drawing far more power than a standard web search. Multiply this by millions of queries, and the energy adds up fast. Microsoft and Amazon have responded by securing huge dedicated power supplies for their cloud AI operations — on the order of 500 MW to 1,000 MW per data centre campus — to ensure they can handle the surging demand. For perspective, a single 1,000 MW data centre campus could consume as much power as 750,000 homes.
  • The sheer consumption of top tech companies is staggering. In 2023, Microsoft and Google each used ~24 TWh of electricity — more power than entire countries like Iceland, Jordan, or Ghana consume in a year. This puts their usage above that of over 100 nations. While these firms have aggressive renewable energy programs, the scale of their energy draw highlights how big the AI computation boom has become.
  • The cloud giants are investing heavily to keep this sustainable. Microsoft recently announced a US$10+ billion deal with Brookfield to develop 10.5 GW of new solar and wind farms by 2030 — an unprecedented corporate clean power purchase aimed squarely at running its AI and cloud data centres on carbon-free energy. Amazon and Google are similarly pouring funds into renewables and even experimental technologies (like advanced geothermal and batteries) to offset their growing AI footprint.
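The household-equivalence comparisons above are simple unit conversions. A minimal sketch, assuming an average US household uses about 10.5 MWh of electricity per year (an assumption; published equivalences shift with this number):

```python
# Convert AI energy figures (GWh) into equivalent US household-years.
# Assumption: an average US household uses ~10.5 MWh of electricity per year.
HOUSEHOLD_MWH_PER_YEAR = 10.5

def household_years(energy_gwh: float) -> float:
    """Energy in GWh expressed as years of average US household consumption."""
    return energy_gwh * 1_000 / HOUSEHOLD_MWH_PER_YEAR  # 1 GWh = 1,000 MWh

# A GPT-4-scale training cycle, using the midpoint of the 50-60 GWh range:
print(round(household_years(55)))       # prints 5238

# A 1,000 MW campus running flat out for a year: 1 GW x 8,760 h = 8,760 GWh
print(round(household_years(8_760)))    # prints 834286
```

At this per-household assumption, the campus figure lands in the same ballpark as the 750,000-home comparison quoted in the text; the exact number moves with the assumption.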

Despite these efforts, power constraints are emerging as a growth limiter for AI. Industry analysts warn that in the next few years, many data centre operators (especially those not backed by big tech) may find it difficult or prohibitively expensive to get the electricity they need.

Gartner projects that by 2027, 4 in 10 AI data centres worldwide could hit their power capacity ceiling, meaning their expansion will be stalled by energy shortages. For enterprises, this could translate to slower cloud rollouts or higher costs as energy prices rise.

However, within this hard truth lies a hidden opportunity — AI itself can help solve the energy challenge. As we’ll explore, the same technology driving up consumption can also drive greater efficiency and new solutions, if wielded wisely.


Comparing AI models: Power hunger from GPT to KNN

Not all AI is equally power-hungry. There is a vast gap in energy consumption between large, state-of-the-art AI models and more traditional algorithms. Understanding this spread can help leaders choose the right AI tools for the job — balancing capability and cost. The table below compares examples of AI models:

Table: Energy requirements for training various AI models range over orders of magnitude. Cutting-edge deep learning models (top rows) consume enormously more energy than smaller neural nets or classical machine learning methods (bottom rows). Choosing a right-sized model can avoid wasting power.

As the table shows, today’s largest AI models (like GPT-3/4) dwarf earlier AI in power needs. Training GPT-4 can use about 50,000× more energy than training a typical convolutional neural network (CNN) like ResNet-50 used for image recognition.

And an old-school algorithm like k-nearest neighbors (KNN) or an ARIMA forecast model might use a million times less energy — essentially negligible in comparison.
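The spread can be made concrete with rough order-of-magnitude estimates. The absolute numbers below are illustrative assumptions chosen to be consistent with the ratios quoted above, not measured values:

```python
# Illustrative training-energy estimates (kWh) for the model classes discussed
# above. Absolute values are rough assumptions; the point is the
# orders-of-magnitude spread between model families.
ESTIMATED_TRAINING_KWH = {
    "GPT-4 (LLM)":     50_000_000,  # ~50 GWh
    "GPT-3 (LLM)":      1_300_000,  # ~1.3 GWh
    "ResNet-50 (CNN)":      1_000,  # ~1 MWh
    "ARIMA / KNN":              1,  # effectively negligible
}

baseline = ESTIMATED_TRAINING_KWH["ARIMA / KNN"]
for model, kwh in ESTIMATED_TRAINING_KWH.items():
    ratio = kwh // baseline
    print(f"{model:>16}: {kwh:>12,} kWh (~{ratio:,}x a classical model)")
```

Under these assumptions, GPT-4 sits about 50,000× above a ResNet-50-class CNN, and roughly a million-fold above classical methods — matching the spread described in the text.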

This doesn’t mean companies should avoid large AI models altogether; rather, it underscores the importance of right-sizing AI to the task. You don’t always need a billion-parameter model if a simpler one works — and the energy (and cost) savings from a leaner approach can be huge.

Key takeaway: AI’s energy footprint isn’t uniform. Generative AI and other complex models can be incredible but come with extreme energy costs.

Business leaders should evaluate whether a smaller, more efficient model could meet their needs. In many cases, optimized or “distilled” models, or running AI at the network edge, can deliver acceptable performance while using a fraction of the power. This efficiency-centric approach to AI adoption will become increasingly vital as energy pressures mount.

Fossil fuel lock-in vs a nuclear renaissance

The tug-of-war between AI’s energy demand and clean energy supply is pushing companies down two very different paths. On one side, some firms and regions are doubling down on fossil fuels to keep the lights on for AI. On the other, there’s a growing movement toward a nuclear revival (along with renewables) to power AI sustainably.


On the fossil fuel front, oil and gas producers see AI’s rise as a new source of demand for hydrocarbons. BP’s CEO Murray Auchincloss, for example, predicts AI’s infrastructure build-out could drive an extra 3-5 million barrels per day of oil demand growth through the 2030s, as data centres and associated supply chains consume more energy (fuel for generators, diesel for construction, etc.). Likewise, Shell’s latest Energy Security Scenarios project natural gas demand reaching 4,640 billion cubic meters annually by 2040, partly to fuel backup generators for data centres and provide grid stability in an AI-enabled economy.

These trends raise concerns that AI could inadvertently lock in a new wave of fossil fuel dependence right when the world is trying to decarbonise. For instance, in the US, some utilities are proposing 20+ GW of new gas-fired power plants by 2040, largely to meet data centre growth.

This runs directly against climate goals — building gas infrastructure that could last 40-50 years to serve what might be a short-term spike in AI-related demand.

Conversely, a potential “nuclear renaissance” is being driven by AI’s 24/7 power needs and corporate clean energy pledges. Nuclear power offers steady, carbon-free electricity that is highly appealing for always-on AI workloads. We’re seeing concrete steps in this direction:

  • Microsoft is investing US$1.6 billion to help reopen the dormant Three Mile Island nuclear plant in Pennsylvania, aiming to secure 24/7 carbon-free power for its AI data centres by 2028. This would repurpose an existing nuclear reactor to directly feed Microsoft’s cloud operations — a bold bet on nuclear as a reliable green energy source for AI.
  • Amazon and Google have each committed at least US$500 million in financing to startup companies developing small modular reactors (SMRs). Their goal is to have about 5 GW of new nuclear capacity from SMRs online by the mid-2030s. Google’s agreement with Kairos Power, for instance, targets the first SMR operational by 2030. If successful, these would be game-changers: modular reactors could be built near data centres to provide dedicated clean power.
  • In Europe, policymakers are increasingly viewing nuclear as essential for meeting AI’s power demands. The EU projects that nuclear-powered data centres (where data centres are co-located with nuclear plants or dedicated reactors) could supply 15-25 per cent of the new electricity needed for AI and digital growth through 2030. France and the UK have floated incentives for data centre operators to hook into existing nuclear plants, while countries like Romania and Estonia are partnering on SMR deployment with an eye toward tech sector needs.

The contrast is striking: Will the AI era deepen our fossil fuel dependence or accelerate the shift to alternative energy?

In practice, both are happening — but the balance could tip one way or the other based on economics and policy. Natural gas plants currently often win on cost and speed (a gas turbine can be built faster than a nuclear plant and is a proven solution to instantly boost capacity).

Indeed, “the only concrete plans I’m seeing are natural gas plants,” notes one energy consultant about data centre expansions. Yet, as carbon costs rise and modular nuclear tech matures, nuclear and renewables could prove the more attractive long-term play.

For corporate leaders, this means energy strategy is becoming inseparable from AI strategy. Companies may need to directly invest in energy projects (like Microsoft’s and Google’s deals) to ensure their AI ambitions have a viable power supply. Those that succeed in securing reliable, clean energy will not only meet sustainability goals but also gain an operational advantage (avoiding the risk of power constraints slowing their AI deployments).

This is part one of a three-part series exploring AI’s energy impact.

Part two of this series examines how AI can enhance energy efficiency and optimise grid management to address this challenge.

This article was originally published here and co-authored by Xavier Greco, Founder and CEO of ENSSO.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.

Join us on Instagram, Facebook, X, and LinkedIn to stay connected.

Image courtesy: DALL-E

The post The AI-energy paradox: Will AI spark a green energy revolution or deepen the global energy crisis? — Part 1 appeared first on e27.


Building smart: A tech founder’s guide to the semiconductor supply chain revolution

The digital era is underpinned by a technology so small it is nearly invisible: the semiconductor. These tiny chips power the devices and systems that define modern life, from smartphones and electric vehicles to AI servers and medical imaging equipment.

As the global demand for semiconductors grows, the race to build resilient, agile, and forward-looking supply chains has never been more critical. For tech founders, especially in Southeast Asia (SEA), understanding this ecosystem is not just strategic—it is existential.

According to Source of Asia, SEA has carved a significant niche in the global semiconductor value chain. While front-end fabrication remains dominated by Taiwan and South Korea, the region has emerged as a vital centre for back-end processes: Assembly, Testing, and Packaging (ATP). These steps are essential to the chip lifecycle and offer enormous value for tech companies seeking reliable, cost-efficient solutions.

Countries such as Malaysia and Vietnam are rapidly becoming semiconductor hotspots. This is driven by low operational costs, supportive government policies, and modern infrastructure. These advantages, coupled with a skilled workforce, have made the region attractive to multinationals and startups alike.

As industries from automotive to telecommunications deepen their reliance on semiconductors, SEA’s role in maintaining global supply chain stability continues to grow. This makes it an ideal launchpad for startups aiming to scale amid geopolitical flux and accelerating digital transformation.


Navigating the semiconductor age demands more than just sourcing components. It requires forming the right strategic partnerships—those that bring not only capital but also technical expertise, global reach, and shared vision.

Infineon Technologies exemplifies such a partner. As a global semiconductor leader, Infineon is committed to driving decarbonisation and digitalisation through power systems and IoT solutions. Their products support everything from clean mobility to smart energy systems. With over 58,000 employees across more than 100 countries, Infineon is not just delivering chips; they are engineering a better tomorrow.

Partnerships like these are crucial for tech founders building hardware or AI-enabled platforms. Having access to high-quality semiconductor technologies, paired with expertise in sustainability and systems integration, can provide a competitive edge in both product performance and market perception.

The capital conduit: Investing in innovation

While tech is the engine, capital is the fuel. Vertex Ventures Southeast Asia and India (VVSEAI) has long recognised this dynamic. The fund has helped build companies such as Grab and PatSnap by not just writing cheques, but also providing strategic counsel, talent access, and introductions to customers and partners across the globe.

For tech founders in the semiconductor-adjacent space—whether in manufacturing, logistics, or AI—VVSEAI offers a unique combination of regional insight and global connectivity. With a presence in every major innovation hub through the Vertex Global Network, their teams share learnings across borders to help startups scale faster and smarter.

As chips grow more complex and demand for efficiency spikes, AI becomes indispensable in semiconductor operations. Innowave Tech is pioneering this shift. The company’s industrial AI solutions address challenges across predictive maintenance, quality assurance, and process automation. By replicating human judgment through edge AI and deep learning, Innowave helps manufacturers streamline operations and reduce downtime.


One of Innowave’s most powerful contributions is in supply chain optimisation. By digitising material flows and applying analytics to forecasting and logistics, they create agile networks that respond swiftly to market changes—a capability that has become mission-critical in today’s unpredictable geopolitical climate.

Future-proofing through knowledge

Understanding the intricacies of semiconductor supply chains is no longer the domain of engineers and operations managers alone. Founders must grasp the broader implications—from sustainability and digital twin adoption to geopolitical risk and capital flow.

This is why the panel discussion “Building in the Semiconductor Age: What Tech Founders Need to Know About Supply Chains, Partnerships, and Strategic Positioning” is unmissable.

Join industry leaders Teong Wei Tan (Infineon), Chan Yip Pang (Vertex Ventures), and Jinsong Xu (Innowave Tech) as they decode the future of semiconductors and what it means for entrepreneurs.

📅 Echelon Singapore 2025
📍 Suntec Singapore
🗓 June 10–11
🕥 Panel: June 11, 10:30 AM – 11:20 AM at Forge Stage

Secure your seat now and future-proof your startup for the semiconductor-powered decade ahead.



Is AI making it harder for tech startups to survive?

Imagine a Singaporean biotechnology startup leveraging artificial intelligence (AI) in diagnostic solutions to determine the most effective cancer treatments for more rapid recovery. Meanwhile, a small agritech company might deploy AI-powered drones to enhance irrigation, pest management and crop health monitoring in rural India. 

AI innovations have rapidly become a non-negotiable driver of success in technology startups, particularly across Asia. Yet, despite these innovations’ ability to streamline functions, boost invention and personalise customer experiences, technology startups face several challenges that can hinder achievement.

Challenges faced by tech startups in the AI age

Despite AI solutions having the power to transform technology startups, integrating them isn’t always straightforward. These are some of the greatest integration difficulties in startup culture.  

Talent acquisition and up-skilling

The skills required for AI-influenced jobs change 25 per cent faster than jobs less impacted by AI, meaning workers must continuously up-skill to stay relevant. Compensation for positions relying on AI expertise also tends to be 25 per cent higher, incentivising professional development and highlighting the importance of AI to companies.

As it stands, talent availability is lacking. Saikat Banerjee — a leader in Bain & Company’s AI, Solutions, and Insights practice — says there will be 1.5 to 2 times more AI-related job openings than there are professionals to fill them by 2027.

According to an MIT Sloan study, 85 per cent of entrepreneurs agree they critically need an AI strategy, whether to seek new opportunities, encourage groundbreaking product development or gain deeper insight into the customer journey. 

Data collection and governance 

Because startup companies are in their infancy, they do not always have relevant data points to train AI models. Data quality and diversity are also crucial. Otherwise, inputs may result in inaccuracies, biases and inadequate predictions, with serious consequences in health care or financial settings. 

Data privacy regulations are on the rise throughout Asia. For instance, Korea’s Personal Information Protection Commission (PIPC) has issued rules allowing consumers to ask about AI decision-making, such as how it makes certain hiring decisions. Hong Kong also encourages responsible AI use in businesses by promoting fairness and transparency.


Infrastructure and computing power

Technology startups must contend with the high costs of cloud computing and specialised equipment for training AI models. As these solutions grow more sophisticated, the need for expansion and additional resources may further strain a startup’s budget. 

Areas with inconsistent internet connectivity could also affect AI performance. According to one report, internet use is 22.5 per cent lower in rural Southeast Asia than in urban areas, except Singapore and Brunei. Climate change impacts in Indonesia, the Philippines and Vietnam, especially, may also hinder broadband infrastructural investments. 

Biases and fairness

Startups must address biases within AI systems. This includes unfair decision-making based on gender, age or race. Failing to mitigate biases could hurt a startup’s reputation and lead to noncompliance. 

Biases may occur during data collection due to insufficient information capture. It might also happen when data gets fed to the models during training. Some regions have introduced new rules requiring companies to recheck information for fairness before continuing conditioning models. 

Funding and investment

Because AI is still developing, technology startups must secure funding to demonstrate the tools’ potential to stakeholders. The most effective approach is establishing clear AI initiatives with each project’s likely return on investment. Asian markets can seek government grants and venture capital for AI specialisations.

China is a prime example of this, having previously directed 23 per cent of US$912 billion in government venture capital funds to 1.4 million early-stage AI startups. The Chinese government issues much of this venture capital to firms with lower software development costs and those with signs of higher growth from the investment.

Integration and implementation

AI implementation may be difficult in existing systems and workflows, especially if teams are resistant or lack proper training. These factors can also put a startup at risk of scams. 

For example, AI models need access to sensitive data. If personal information gets into the wrong hands, businesses and their customers may be susceptible to scammers. Bad players may use AI tools to create convincing deepfakes of people or communications to collect money. Others may use fraudulent chatbots impersonating customer service representatives to steal credit card information.


According to a Deloitte report, only 33 per cent of employees have received generative AI training, and 35 per cent say they weren’t satisfied with their learning. A company must ensure a clear strategy for AI integrations and prepare its employees for the change. 

Tips for startups to overcome these challenges

Technology startups must keep up with evolving AI advancements even as they find their footing. Companies should concentrate their investments in talent acquisition, data management and computing infrastructure for maximum returns. 

Integrating AI into a company’s business plan should focus on concrete outcomes and revenue. Seeking investors with AI knowledge and pursuing federal grants and funding programs — including crowdfunding — is another way to garner capital, test the market and reduce risk. 

A successful technology startup is only as good as those working there. Therefore, finding the best talent with AI expertise and providing comprehensive training and professional development is essential.

Additional suggestions for overcoming the challenges of AI in a technology startup include:

  • Explore public data platforms and exchanges.
  • Enhance training data by modifying existing points and creating new, quality data from scratch.
  • Implement stringent data management and security measures.
  • Utilise cloud computing for adaptability and scalability.
  • Improve AI model efficiency for the most productive resource utilisation.
  • Integrate AI with smaller, more concentrated projects, such as resolving specific business-related issues.
  • Make improvements to AI tools according to feedback and results.
  • Encourage employee and stakeholder engagement during AI implementation.
  • Support employees with AI training.

It is equally important to address potential biases in AI technology. Startup owners might consider launching an ethics committee or advisory board to establish responsible AI development and utilisation. The committee will review AI projects, detect possible biases, and prioritise transparency to build trust and manage risks.

Embracing AI in the startup landscape

As AI advances, startups should find ways to adopt it in practice. Although the challenges are valid, AI can transform businesses for the better. Considering startups must build themselves from the ground up, embracing AI responsibly and gradually is a sure path to success.



Image credit: Canva Pro



Singapore’s AI ambitions face crucial test amid economic and talent pressures

Singapore’s push to lead in artificial intelligence faces mounting headwinds as global economic pressures and persistent talent shortages undercut momentum.

According to a new survey by global HR and payroll platform Deel, conducted with Milieu Insight, 81 per cent of Singaporean companies report negative impacts from global tariffs, with many forced into difficult workforce decisions such as wage freezes, reduced hiring, and retrenchments.


The findings—based on responses from 350 business leaders across SMEs and large enterprises—reveal a critical inflection point for the island nation’s digital transformation agenda. Despite AI’s promise to boost productivity and efficiency, adoption remains uneven and cautious across the market.

Global shocks temper AI optimism

Singaporean businesses find themselves squeezed between escalating operational costs due to tariffs and the imperative to invest in innovation. More than half (56 per cent) of respondents cite increased costs, with AI-forward companies feeling this pinch even more acutely (86 per cent).

Yet, the potential benefits of artificial intelligence remain compelling. Companies leveraging AI report tangible gains: 71 per cent cite improved productivity, 61 per cent report operational optimisation, and 50 per cent realise cost savings. Nearly a third (31 per cent) have accelerated AI and automation in response to global instability—an indication that AI is seen as a resilience tool in volatile times.

Talent bottlenecks slow AI deployment

Even as the benefits of AI become clearer, Singapore’s talent pipeline lags behind. A staggering 68 per cent of businesses are still in the early stages of AI adoption, with only 12 per cent of SMEs reaching intermediate levels, compared to 43 per cent of larger enterprises.

Talent shortages are the main culprit. Nearly half of the respondents say local AI expertise is insufficient, and high salary expectations, limited career growth, and skill mismatches further hinder recruitment.

As a stopgap, 62 per cent of firms are open to hiring from overseas, but only 20 per cent have budgets set aside to reskill their current workforce—a disconnect that could stall sustainable progress.

“Talent remains the single biggest barrier to scaling AI,” said Nick Catino, Global Head of Policy at Deel. “Cross-border hiring can fill gaps, but must be paired with effective knowledge transfer to uplift local teams”.

Government support recognised but underutilised

Singapore has laid out comprehensive strategies to foster AI, including the National AI Strategy (NAIS 2.0). However, awareness and engagement remain low.

While 92 per cent of businesses see government support as vital—particularly in funding and upskilling—only 5 per cent are actively engaging with existing AI frameworks. A striking 95 per cent say they are unfamiliar or only mildly familiar with the governance framework.


This lack of engagement comes despite calls for stronger regulatory guardrails from 57 per cent of the respondents. The gap suggests that while the government’s intent is clear, execution and awareness-building efforts need urgent reinforcement.

Aligning talent, policy, and tech for a future-ready Singapore

As the AI race intensifies, Singapore must bridge its knowledge and talent gaps to sustain its leadership. Proactive engagement with policy frameworks, robust upskilling strategies, and targeted AI investments will be essential. Only through this alignment can the city-state realise the transformative potential of AI—turning today’s headwinds into tomorrow’s competitive edge.


The picture was generated by ChatGPT.

The post Singapore’s AI ambitions face crucial test amid economic and talent pressures appeared first on e27.

Posted on

The architecture of bad deals: Moral hazard in modern business

One of the most overlooked reasons why businesses lose money — whether in outsourcing, sales, partnerships, or overseas investments — is not incompetence or bad luck, but a deeper structural issue: the principal–agent problem.

The principal–agent problem occurs whenever one party (the principal) depends on another party (the agent) to act on their behalf. In theory, both should want the same outcome. In reality, their incentives rarely match.

The agent often gets paid immediately. The principal only wins or loses over time.

So the agent pushes risky, unsuitable, or outright worthless products — with zero accountability.

This gap creates the perfect environment for moral hazard — a situation where the agent takes risks, exaggerates promises, or cuts corners because they don’t suffer the consequences. The reward is theirs. The downside is yours.

We see this everywhere:

  • A real estate broker earns a commission upfront, even if the project collapses later.
  • A sales consultant overpromises because they’re paid for closing, not delivering.
  • An overseas agent recommends a vendor they secretly have a side deal with.
  • A “financial advisor” pushes long-term products they barely understand but that give them the highest commission.
  • Recruiters sell you a candidate because their incentive is placement, not performance.

In each case, the agent gets their reward long before you experience the true outcome. And by the time you discover the risks, it’s too late.

When the agent’s payoff is front-loaded while the principal’s risk is long-term, misalignment becomes extreme.

Moral hazard doesn’t require malicious intent

Sometimes the agent simply doesn’t know what they’re selling, doesn’t understand the risks, or never has to live with the consequences.

The structure itself encourages overconfidence and under-disclosure.

The incentives make it rational for agents to behave this way — even if it harms the principal.

Also Read: AI’s biggest bottleneck isn’t intelligence but fragmentation: i10X co-founder

Why moral hazard produces predatory behaviour

The seller wins even if you lose. That is why so many business deals are filled with:

  • Inflated projections that exaggerate the upside
  • Minimised or hidden risks
  • Aggressive persuasion
  • Zero accountability
  • Fake credibility (watches, cars, “success lifestyle”)
  • Attacks or gaslighting directed at anyone who questions them

Once they collect the fee, they disappear.

And when you combine:

  • Information asymmetry (they know more than you about this market)
  • Principal–agent problem (their goals differ from yours)
  • Moral hazard (they don’t suffer if they’re wrong)

You get a perfect recipe for:

  • Overconfidence
  • Deception
  • Exploitation
  • Bold promises without accountability

This explains why entire industries gravitate toward unethical behaviour — simply because the system rewards the wrong things.

So how do you fix it?

You don’t fix it with trust. You fix it with structure.

Structuring business partnerships for greater accountability

You don’t fix the principal–agent problem by hoping for good behaviour. You fix it by engineering the incentives so that bad behaviour is punished, and good behaviour is rewarded.

The only proven way to reduce this moral hazard is to align incentives, share risk, and impose accountability.

Create shared “skin in the game”

If the agent benefits only when you benefit, incentives realign instantly.

Examples:

  • Profit-sharing instead of upfront fees
  • Milestone-based payments instead of full deposits
  • Escrow release tied to verified outcomes
  • Advisors who invest in the same assets they recommend
  • Consultants paid based on measurable deliverables

This transforms the relationship from: “Your risk, my reward” → “Our performance, shared reward.”

If the agent refuses performance-linked compensation, that’s a red flag.
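The payoff gap between upfront fees and performance-linked pay can be made concrete with a toy model. All figures here are illustrative assumptions, not data from this article; the point is only that an upfront fee pays the agent the same whether the principal wins or loses, while a profit share ties the agent's reward to the outcome:

```python
# Toy payoff model: how the compensation structure changes the agent's
# exposure to the principal's outcome. All numbers are illustrative.

def agent_payoff(structure: str, project_profit: float) -> float:
    """Return the agent's payoff under a given compensation structure."""
    if structure == "upfront_fee":
        return 10_000.0                           # paid regardless of outcome
    if structure == "profit_share":
        return 0.25 * max(project_profit, 0.0)    # paid only on realised profit
    raise ValueError(f"unknown structure: {structure}")

outcomes = {"project succeeds": 100_000.0, "project collapses": -50_000.0}

for structure in ("upfront_fee", "profit_share"):
    for label, profit in outcomes.items():
        print(f"{structure:12s} | {label:17s} | agent earns {agent_payoff(structure, profit):>9,.0f}")
```

Under the upfront fee, the agent earns the same 10,000 whether the project succeeds or collapses; under the profit share, a collapse pays the agent nothing, which is exactly the "skin in the game" the section describes.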

Break the information asymmetry

The principal–agent problem amplifies when the agent knows 10x more than the principal.

You fix this by:

  • Third-party verification
  • Independent due diligence
  • Local experts auditing claims
  • Transparent documentation
  • Competitor comparison
  • Data access (not only brochures)

Use escrow and controlled payment structures

Most predatory deals survive because payment is front-loaded.

You solve this with:

  • Escrow accounts
  • Release-by-milestone payments
  • No commission until a quality check is passed
  • Split payments tied to measurable deliverables

When agents know they won’t get paid unless the job is real, inflated promises disappear.
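A milestone escrow is easy to sketch as a small state machine. The milestone names and amounts below are hypothetical, and in practice the funds would sit with a neutral third party; the sketch only shows the core rule — money is released per verified milestone, never upfront and never twice:

```python
# Minimal sketch of milestone-based escrow release.
# Milestone names and amounts are hypothetical; a real escrow would be
# held and verified by an independent third party.

from dataclasses import dataclass, field

@dataclass
class Escrow:
    total: float                       # full contract value held in escrow
    milestones: dict                   # milestone name -> share of total
    verified: set = field(default_factory=set)

    def verify(self, milestone: str) -> float:
        """Release a milestone's funds only after it passes verification."""
        if milestone not in self.milestones:
            raise KeyError(f"unknown milestone: {milestone}")
        if milestone in self.verified:
            return 0.0                 # no double payment for the same milestone
        self.verified.add(milestone)
        return self.total * self.milestones[milestone]

    @property
    def held(self) -> float:
        """Funds still locked until the remaining milestones are verified."""
        unreleased = set(self.milestones) - self.verified
        return sum(self.total * self.milestones[m] for m in unreleased)

deal = Escrow(total=90_000.0,
              milestones={"spec_signed": 0.25, "delivered": 0.5, "quality_check": 0.25})
print(deal.verify("spec_signed"))   # 22500.0 released on verification
print(deal.held)                    # 67500.0 still locked
```

The design choice is that `verify` is the only way money leaves the escrow, so the agent's payout schedule is forced to track real, checked progress rather than promises.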

Align incentives with long-term outcomes

The principal–agent problem is a timing problem:

  • The agent gets paid now.
  • The buyer suffers consequences later.

Solutions:

  • Tie fees to long-term performance
  • Lock advisors into accountability periods
  • Require warranties or after-sales responsibilities
  • Stagger commissions so payout matches the risk window

This discourages short-term “pump and dump” behaviour.

Increase transparency

Misalignment thrives in the dark.

Reduce it by:

  • Open-book reporting
  • Shared dashboard of project status
  • Mandatory disclosure of incentives
  • Conflict of interest declarations
  • Recording sales calls/documentation

When incentives are visible, bad actors can’t hide them.

Also Read: How to spot the signals that move the needle: A Founder’s guide to cutting through the clutter

Certifications, vetting, and competence checks

Unqualified agents cause as much harm as malicious ones.

Solutions:

  • Minimum knowledge requirements
  • Mandatory training on product risks
  • Verified local licensing
  • Background checks
  • Complaint history reviews

Many “sales agents” don’t even understand what they’re selling — eliminating incompetent agents protects the principal.

Separate advice from sales

This is the reform that transformed modern financial regulation.

To reduce misaligned incentives:

  • The advisor should not be the seller
  • The seller should not be the evaluator
  • The promoter should not structure the fee
  • Advice should be fee-based, not commission-based

When the same person advises you and sells to you, chances are that you will end up on the losing side of the trade.

Build feedback loops + consequences

Bad agents thrive because there is no downside for deception.

Fix this with:

  • Blacklisting bad vendors
  • Public reviews
  • Performance scoring
  • Contractual penalties
  • Mandatory refunds for negligence

Give the agent something to lose for bad behaviour and create a structure to encourage positive change.

Also Read: A Founder’s field guide on 10x talent

The danger isn’t just “bad people.” It’s bad incentives that reward bad behaviour.

When incentives change, behaviour changes.

And in global markets — especially cross-border investments, outsourcing, and vendor sourcing — solving the principal–agent problem is the difference between sustainable growth and expensive mistakes.

Because when the ecosystem is structured to protect the principal, corruption declines, quality improves, and everyone performs better.

In the end, moral hazard is not about morality — it’s about incentives.

In summary: Fix incentives → reduce moral hazard → improve markets → protect investors.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.


Image credit: Canva

The post The architecture of bad deals: Moral hazard in modern business appeared first on e27.