Cybersecurity and data governance in the boardroom: A strategic imperative for Asian boards

In today’s hyperconnected world, cybersecurity and data governance have become board-level imperatives. A single breach, data leak, or regulatory misstep can inflict not only financial loss but also reputational damage, legal penalties, and erosion of stakeholder trust. Yet, despite escalating threats, many boards in Asia still treat cybersecurity as a technical issue rather than a strategic risk requiring active oversight.

Boards that treat cybersecurity and data governance as strategic responsibilities safeguard enterprise value, build stakeholder confidence, and enable sustainable growth.

The rising stakes of cyber and data risks

Asia is a hotspot for cyber threats due to its large digital economies, rapid adoption of cloud and AI technologies, and cross-border data flows. Boards must consider risks that include:

  • Ransomware and cyberattacks: Disrupting operations, supply chains, and customer services.
  • Data privacy breaches: Regulatory fines under GDPR, PDPA, or local privacy laws.
  • Third-party vendor vulnerabilities: Supply chain attacks exposing sensitive information.
  • AI and algorithmic risks: Mismanaged models leading to bias, fraud, or operational errors.
  • Reputational exposure: Loss of customer trust can impact market position and valuation.

The frequency, complexity, and financial impact of cyber incidents are growing. According to recent studies, Asian organisations face a 40–50 per cent higher risk of cyberattacks than global averages, making board-level attention essential.

Boards must shift from compliance to strategic oversight

Traditional approaches — approving IT budgets or receiving quarterly reports — are no longer sufficient. Boards must integrate cybersecurity and data governance into enterprise risk and strategy discussions:

  • Strategic risk lens: Treat cyber and data risks as core to enterprise risk management, not merely IT risk. Consider potential operational, regulatory, financial, and reputational impacts.
  • Continuous monitoring and reporting: Boards should receive real-time dashboards on threat levels, incident response readiness, and regulatory compliance. Lagging metrics are insufficient in a rapidly evolving threat landscape.
  • Scenario planning and stress tests: Boards should engage management in simulations of cyberattacks, data leaks, or AI system failures. These exercises reveal weaknesses and prepare leadership for high-stakes incidents.

Key questions boards should ask

To fulfil their oversight responsibilities, boards should challenge executives with strategic questions:

  • How are we securing critical infrastructure and sensitive data across the organisation?
  • What are the key third-party or supply chain vulnerabilities?
  • How frequently do we conduct penetration tests, audits, and incident simulations?
  • What is our incident response plan, and how quickly can it be executed?
  • Are cybersecurity and data governance KPIs embedded into executive performance evaluations?

These questions elevate cybersecurity from a technical discussion to a board-level governance concern.

Integrating cybersecurity into culture and talent strategy

Effective oversight requires more than policies; it requires embedding cyber awareness into organisational culture:

  • Executive accountability: CEOs and CIOs must be responsible for implementation, with boards reviewing outcomes.
  • Employee awareness: Continuous training reduces risk from human error and phishing attacks.
  • Talent capability: Boards should assess whether the organisation has sufficient cybersecurity expertise at all levels.
  • Cross-functional integration: Cyber and data governance should be connected with risk, compliance, and business strategy functions.

Culture is the often-overlooked defence layer — it is as important as technology.

Board capabilities and education

Aspiring independent directors must demonstrate:

  • Cyber literacy to understand key threats, mitigation strategies, and emerging technologies.
  • Awareness of regulatory trends, including cross-border data flows and privacy compliance.
  • Capability to challenge management assumptions while remaining constructive.
  • Understanding of AI, cloud, and digital platforms as both opportunities and vulnerabilities.

Boards should periodically engage external advisors, conduct briefings, and participate in tabletop exercises to maintain readiness.

Conclusion: Cybersecurity and data governance as strategic imperatives

Cybersecurity and data governance are no longer IT issues — they are enterprise-wide, strategic imperatives. Boards that integrate these considerations into strategy, risk management, and culture:

  • Protect enterprise value from financial and reputational loss
  • Strengthen investor and stakeholder confidence
  • Enable responsible digital transformation
  • Ensure organisational resilience in an increasingly connected world

For Asian boards, the mandate is clear: cyber and data governance are now board responsibilities, not optional technical topics. Boards that lead here create both security and competitive advantage.

This article was first published on The Boardroom Edge.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.

Enjoyed this read? Don’t miss out on the next insight. Join our WhatsApp channel for real-time drops.

Image credit: Canva

The post Cybersecurity and data governance in the boardroom: A strategic imperative for Asian boards appeared first on e27.

AI Pulse Exclusive: How CoBALT is designing AI that teams can actually trust

An interview with Stella Seohyeon Kim, COO and Co-Founder of CoBALT, on building AI as operational infrastructure, earning user trust, and applying AI in real workflows, as part of e27's AI Pulse coverage.

In this interview, e27 speaks with Stella Seohyeon Kim, COO and Co-Founder of CoBALT, a company building AI-native systems that help organisations turn everyday interactions into tangible business opportunities. Through its flagship product REALIZER.ai, CoBALT operates at the intersection of sales, business development, and operations, offering a grounded perspective on how AI is being embedded into real workflows as trusted operational infrastructure rather than surface-level features.

This conversation sits within e27’s broader AI coverage, which examines how organisations across the region are building, deploying, and governing AI in practice.

Turning first meetings into real business opportunities

e27: Briefly describe what your organisation does, and where AI plays a meaningful role in your work or offering.

Stella: CoBALT operates REALIZER.ai, an AI-native assistant that turns the people you meet at work into real business opportunities.

Business developers and sales teams meet dozens, sometimes hundreds, of potential customers, partners, and investors through meetings, conferences, and industry events. REALIZER elevates those first encounters from simple contact exchanges into qualified opportunities.

After a meeting, a user can scan a business card, enter an email address, or leave a short voice note about the interaction. From there, Realizer quietly organises the contact, researches the person and company, evaluates the opportunity, and drafts the first follow-up message. The user simply reviews and sends it.

There is a golden window after meeting someone, roughly 48 hours. When meaningful touchpoints are created within that time, the chance of converting the relationship increases dramatically. Realizer is designed to help teams act within that window.

Making individual interactions organisational assets

e27: What is one concrete way AI is currently creating value within your organisation or for your users or customers?

Stella: The greatest value Realizer delivers is turning every individual interaction into a reliable organisational asset.

Instead of relying on personal intuition or fragmented experience, REALIZER enriches and verifies information about prospects, partners, and investors using consistent criteria. It applies a shared logic for evaluating opportunities and recommending next actions.

As a result, teams view opportunities through a common lens, improve pipeline predictability, and move faster without missing critical moments. On an individual level, AI supports not only labour-intensive tasks but also work that requires higher-level reasoning, helping people achieve real outcomes, not just efficiency.

Defining how humans and AI collaborate

e27: What was a key decision or trade-off you had to make when adopting, building, or scaling AI?

Stella: The most difficult, and most important, challenge was defining how humans and AI collaborate.

For effective collaboration, people need to feel confident that they remain in control while still trusting AI-driven decisions. That requires redesigning processes and delivering an experience where AI works almost invisibly, flowing naturally, without users constantly noticing or managing it.

This is the first time in human history that we are working alongside non-human intelligence. There has been trial and error, but our guiding principle is clear: AI should not diminish human value; it should amplify it. Just as electricity became seamlessly embedded into daily life, AI should quietly integrate into workflows and elevate them.

Building trust while managing AI imperfections

e27: Looking back, what has worked better than expected, and what proved more challenging than anticipated?

Stella: Imagine hiring a new employee who executes tasks flawlessly without supervision. That would be ideal. But if you constantly need to double-check their work and clean up mistakes, they quickly become a liability.

AI, especially large language models, is a new kind of junior hire. Depending on how you instruct it, the output can range from excellent to disastrous. It never complains and can repeat tasks endlessly, but it can also hallucinate with complete confidence.

Designing instructions and systems that consistently lead to high-quality outcomes was far more delicate than expected. We believe trust is the foundation of human-AI collaboration, so we built Realizer to earn that trust. It evaluates information across more than 50 sources, applies dozens of validation criteria, and presents not only insights but also confidence levels.

What proved harder was keeping this disciplined AI mostly out of sight, allowing humans to feel effective without constantly confronting AI’s imperfections. AI makes mistakes, just like people do. Managing those failures without burdening users requires a careful balance. It’s challenging, but we believe this balance is what ultimately leads to long-term adoption and genuine affection for the product.

AI requires new ways of working

e27: What is one lesson about applying AI in real-world settings that leaders or founders often underestimate?

Stella: AI is not a magic wand.

Leaders must recognise that adopting AI is not merely a technical upgrade; it is the introduction of a new way of working. No matter how advanced the model is, poorly designed instructions and workflows can make AI worse than useless.

If an organisation fails to adapt how it collaborates with AI, performance may actually decline rather than improve.

Starting small to earn trust

e27: Based on your experience, what is one practical recommendation you would give to organisations that are just starting to explore or scale AI?

Stella: Start small, at a single high-friction decision point.

Rather than pursuing large-scale digital transformation, apply AI to one area where people struggle most or repeatedly waste time. Prove real impact there first, then expand. When there is a clear owner and measurable outcome, AI earns trust and becomes embedded naturally within the organisation.

From AI features to operational infrastructure

e27: Over the next 12 months, how do you expect your organisation’s use of AI, or the role of AI in your industry, to evolve?

Stella: Over the next year, AI will move beyond task-level assistance and become core operational infrastructure.

Within Realizer, AI will increasingly reassess opportunities continuously, monitor signals across channels, and recommend next actions at the team level. Across industries, the competitive edge will shift from having AI features to building trusted, governable AI systems that organisations are willing to rely on in real operations.

Why alignment matters more than speed

e27: Anything else you want to share with the audience?

Stella: The true value of AI is not in making individuals faster; it lies in making organisations more aligned and more decisive.

Working with startups as well as publicly listed Korean companies has made one thing clear. The winners are not the teams with the flashiest models, but those that design AI around trust, clarity, and execution. As AI becomes invisible infrastructure, what matters most is not how impressive it looks, but how deeply and thoughtfully it is integrated.

Stay ahead of how AI is actually being used

This conversation highlights a recurring theme in how AI is moving from experimentation to everyday use. Rather than chasing novelty, CoBALT’s approach centres on trust, alignment, and designing AI that fits naturally into how teams already work. From capturing fleeting first meetings to building shared organisational judgment, Stella Seohyeon Kim’s perspective underscores that the real challenge of AI adoption lies less in models and more in systems, workflows, and human confidence. As AI becomes quieter and more embedded, the organisations that succeed will be those that treat it as operational infrastructure, not a showcase feature.

For more interviews, analysis, and real-world perspectives on how organisations across the region are applying AI in practice, subscribe to our newsletter. You can also explore more AI stories here.

The e27 team produced this article.

We can share your story at e27 too! Engage the Southeast Asian tech ecosystem by bringing your story to the world. You can reach out to us here to get started.

Featured Image Credit: CoBALT

The post AI Pulse Exclusive: How CoBALT is designing AI that teams can actually trust appeared first on e27.

The AI-energy paradox: Will AI spark a green energy revolution or deepen the global energy crisis? — Part 2

As AI’s energy consumption surges, concerns over its environmental impact grow. However, AI also offers solutions — optimising data centre cooling, managing smart grids, and reducing industrial energy waste. This article explores how AI-driven efficiency can help counterbalance its own power demands, creating a path toward more sustainable energy use.

AI-driven efficiency: Mitigating the carbon toll

While AI’s energy consumption is undeniably large, AI technologies also offer powerful tools to cut energy waste and emissions across many industries. From cooling data centres to optimising factory lines and smart grids, AI-driven efficiency gains can act as a counterweight to AI’s own power use. In essence, there is an opportunity for a positive feedback loop: using AI to save energy even as we use energy to run AI.

Some notable examples of AI-enabled efficiency breakthroughs:

  • Data centre cooling optimisation: Google’s DeepMind cut data centre cooling energy by 40 per cent by predicting server loads and adjusting cooling in real time.
  • Next-gen cooling technologies: Advanced cooling solutions, such as direct-to-chip liquid cooling, have been shown to reduce server energy use by around 30 per cent, with liquid cooling now used in up to 45 per cent of new European facilities.
  • AI-managed micro-grids: In regions like Ohio and Texas, experimental micro-grids leverage AI to balance renewable energy with data centre power draw, cutting renewable curtailment by about 22 per cent.
  • Industrial energy management: AI applications have helped Toyota reduce energy consumption by 29 per cent on certain manufacturing processes.
  • Building energy management: In commercial buildings, AI has shown impressive results in cutting power usage without sacrificing comfort. A notable case is 45 Broadway in Manhattan, where an AI HVAC optimisation system led to a 15.8 per cent reduction in HVAC energy use; the algorithms learned the building's patterns and adjusted heating and cooling more intelligently. Similarly, AI-based controls for lighting and appliances can yield up to 30 per cent energy savings in buildings. Multiplied across millions of buildings and homes, the potential savings are enormous.
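
The common thread in these examples is load-following control: predict demand, then provision just enough capacity, rather than running at a fixed worst-case level. The sketch below illustrates the idea for cooling; the forecaster, the 15 per cent safety margin, and all numbers are hypothetical simplifications, not DeepMind's or any vendor's actual system.

```python
# Load-following cooling sketch: forecast the next period's server load and
# size cooling to that forecast, instead of running at a fixed worst-case
# level. All numbers here are hypothetical.

def predict_load(history):
    """Naive forecast: mean of the last three load observations (kW)."""
    recent = history[-3:]
    return sum(recent) / len(recent)

def cooling_power(load_kw, overhead=0.15):
    """Cooling sized to the forecast load plus a 15 per cent safety margin."""
    return load_kw * (1 + overhead)

def simulate(loads, fixed_cooling_kw):
    """Total cooling energy over the same periods, fixed vs load-following."""
    history = list(loads[:3])          # seed the forecaster
    periods = loads[3:]
    fixed_total = fixed_cooling_kw * len(periods)
    adaptive_total = 0.0
    for actual in periods:
        adaptive_total += cooling_power(predict_load(history))
        history.append(actual)         # learn from the observed load
    return fixed_total, adaptive_total
```

With a sample day where load peaks around 60 kW but worst-case provisioning is 80 kW, the load-following loop uses roughly a fifth less cooling energy over the same periods — the same shape of saving, if not the same magnitude, as the real deployments above.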

These examples illustrate a hopeful counterpoint to AI’s energy appetite: the energy savings AI enables in other areas could, in theory, offset a significant portion of the energy AI consumes. Smarter grids, smarter buildings, smarter transportation (AI-optimised logistics, etc.) all contribute to lower overall demand.

A Shell analysis suggests AI applications could halve the carbon intensity of global energy by 2050 through such measures — coordinating renewables, improving efficiency, and innovating in materials (for example, using AI-driven design to create wind turbine blades that generate 40 per cent more power).

However, a critical question remains: Can AI’s energy-saving contributions catch up with its own growing consumption? This is the crux of the AI-energy paradox.

The AI-energy paradox: Do savings and consumption converge?

Right now, the net impact of AI on global energy is still an increase in demand. AI’s usage is growing so rapidly that efficiency gains, as valuable as they are, haven’t yet kept pace.

For instance, even as Google’s AI cut 40 per cent of cooling energy, the expansion of Google’s AI computing meant total energy use still rose. The near-term trend is divergence — AI driving more power use overall, despite localised savings.

Current figures bear this out. The US Department of Energy found that data centres (thanks largely to AI growth) consumed about 4.4 per cent of US electricity in 2023, and are on track to reach between 6.7 per cent and 12 per cent by 2028.

In other words, efficiency improvements are not projected to stop a doubling (or more) of data centres' energy draw in the next five years.

A recent Electric Power Research Institute analysis likewise forecasts US data centres could hit nine per cent of national electricity use by 2030, up from about four per cent today. Clearly, in the short run, AI's footprint is outpacing the savings it enables elsewhere.

Over the longer term, there is a possibility (not a guarantee) that the curves could converge. As AI matures, there’s intense research focus on efficiency: more efficient algorithms, specialised AI chips that deliver more performance per watt, better cooling, and so on. If each new generation of AI hardware is significantly more efficient, the growth in AI’s energy use could level off.

For example, tech firms are now prioritising energy efficiency over pure performance gains — a shift from the early “move fast” approach. Future AI models might be designed to be smaller or use smart techniques (like model sparsity or on-demand activation) that save energy.

Policymakers are also starting to push for convergence. The EU’s proposed AI Act will require large AI models to demonstrate 15 per cent energy efficiency improvements over previous generations — effectively slowing deployment of ultra-large models until they are more efficient (one reason rumours suggest GPT-5 might be delayed until such standards can be met). Governments may introduce carbon taxes or energy caps that make it economically unattractive to run wasteful AI systems, forcing innovation towards frugality.

So, will spending and savings converge? Optimistically, yes — but likely not until late this decade or beyond.

In a scenario where AI's growth moderates and efficiency tech accelerates, we could see AI's net impact plateau or even turn net-negative on emissions (especially if AI helps integrate huge amounts of renewables, as Shell's scenario imagines).

But for the next 5-10 years, business leaders should plan for a world where AI means higher energy consumption and carbon output, and manage that reality accordingly.

The implication for corporates is twofold:

  • Invest aggressively in AI-driven efficiency projects within your own operations (to capture savings that can offset your AI usage).
  • Anticipate energy costs and capacity needs rising with AI, and incorporate that into everything from site selection (do your data centre/cloud regions have spare power capacity?) to vendor selection (choose partners with greener energy and efficient infrastructure).

In short, don’t assume the problem will solve itself. Proactive action is needed to bend the curve.

Accelerating the renewable transition to power AI

If AI is to spark a green energy revolution instead of exacerbating the crisis, a massive scale-up of clean energy is required. Renewables (solar, wind, hydro) need to grow in tandem with AI compute demand, and AI can be a catalyst to accelerate that growth. But it won’t happen automatically; it requires strategic investments and innovation.

On the plus side, AI is already helping get more out of renewables. We saw how AI can optimise wind and solar output (e.g., smarter inverters yielding 18 per cent more solar farm efficiency). AI can forecast weather and adjust operations to maximise renewable energy capture and reduce downtime.

For instance, autonomous AI-driven networks of electric vehicle (EV) chargers can collectively act as a 450 GWh battery for the grid, smoothing out renewable fluctuations by intelligently timing charging. AI is also being applied to breakthrough research — like using quantum computing and AI to design advanced materials for solar panels or wind turbines, potentially boosting their efficiency dramatically.
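
The charging-timing idea can be sketched as a simple greedy scheduler that shifts flexible EV charging demand into the hours with the most renewable surplus. The figures and the `schedule_charging` helper are illustrative assumptions, not a real grid-control API.

```python
# Toy renewable-aware charging scheduler: allocate flexible EV charging
# demand to the hours with the largest renewable surplus first.
# Hypothetical numbers; real schedulers also handle prices, battery limits,
# and driver deadlines.

def schedule_charging(surplus_kwh, demand_kwh):
    """Greedily fill the highest-surplus hours; returns kWh charged per hour.
    Any demand beyond the total surplus is simply left unscheduled."""
    plan = [0.0] * len(surplus_kwh)
    remaining = demand_kwh
    # Visit hours from most to least renewable surplus.
    for hour in sorted(range(len(surplus_kwh)), key=lambda h: -surplus_kwh[h]):
        take = min(remaining, surplus_kwh[hour])
        plan[hour] = take
        remaining -= take
        if remaining <= 0:
            break
    return plan

# Example: surplus peaks at midday; 30 kWh of flexible charging demand
# lands in the two sunniest hours instead of the evening peak.
plan = schedule_charging([5, 20, 25, 10], 30)
```

Aggregated over a large fleet, this kind of timing is what lets chargers behave like a distributed battery: charging absorbs the midday surplus and eases off when renewables dip.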

However, even optimistic efficiency gains won’t fully bridge the gap. The scale of new clean power needed is enormous.

A McKinsey study estimates that in Europe alone, an additional US$250-300 billion in grid infrastructure upgrades will be required by 2030 to handle 150 TWh of new AI-related electricity demand and connect enough renewables to supply it.

This includes new transmission lines, grid storage, and smarter distribution — essentially building a bigger, smarter grid to feed AI. Without such investment, renewable deployment could lag and AI would end up being powered by whatever is available (often coal or gas).

To put numbers on it: The world added about 300 GW of renewable capacity in 2022. If AI demand is rising by hundreds of TWh, we likely need to add hundreds more GW of renewables per year on top of current plans just to keep AI from increasing fossil fuel use.

Policymakers are starting to respond — the US Inflation Reduction Act, Europe’s Green Deal, China’s massive renewables build-out — all boost clean energy, which indirectly supports AI’s growth sustainably. But targeted actions may be needed, such as incentives for energy-intensive tech firms to directly finance renewable projects (as Microsoft is doing).

One promising idea is direct clean power procurement for AI infrastructure. Instead of buying offsets or generic renewable credits, companies can invest in additional renewable generation that is tied to their data centres. Google has been a leader here, aiming for “24/7 carbon-free” energy by sourcing clean power in every hour and region that its servers operate. Other firms are now looking at similar models, which could drive significant new solar/wind development.

In summary, AI can accelerate the renewable transition — by necessity and by capability. It provides a strong business motive (big tech needs clean power, so they’ll fund it) and new tools (AI to optimise renewable performance). But it also raises the stakes: if renewables don’t scale fast enough, AI will end up entrenching fossil fuel use at exactly the wrong time for the climate.

For corporate leaders, this means aligning AI strategy with energy strategy. Embrace AI projects that further sustainability (smart grid, energy optimisation) and be cautious of AI expansions that outpace your access to green power. Seek partnerships in the energy sector — for example, co-develop a solar farm or wind park that can power your AI workloads. Those who proactively secure clean energy for AI will not only mitigate environmental impact but also hedge against future carbon regulations or fossil price volatility.

Geopolitical and economic crossroads

AI’s energy demands are now a factor on the geopolitical chessboard. Nations are racing to support their tech industries with reliable power (often in competition with climate goals), and energy dependencies are influencing tech policies. Three major theatres highlight this dynamic: the US-China tech competition, Europe’s regulatory balancing act, and emerging markets vying for data centre investments.

The US-China tech war’s energy dimension

China and the United States are both pouring billions into AI, and with that comes a hunger for energy. China has launched an “East Data, West Computing” initiative, investing an estimated US$75 billion to build huge data centre hubs in its inland provinces. Why inland? Because electricity is cheaper there — for example, coal-rich Inner Mongolia offers industrial power rates around US$0.03 per kWh, among the lowest in the world.

By situating AI data centres next to coal plants in the interior, China can fuel its AI growth at low cost (albeit with high emissions). This strategy effectively leverages China's vast coal infrastructure to gain an edge in computing capacity.

Meanwhile, the US is responding with investments to support AI hotbeds at home. The Department of Energy recently announced US$2 billion for grid upgrades focused on “AI corridors” like Northern Virginia and Ohio. This includes improving transmission and reliability to ensure these regions (where many US cloud data centres cluster) can handle the increased load without blackouts or slowdowns. It’s essentially an infrastructure subsidy to keep US AI development on track and independent of energy bottlenecks.

There’s also a security aspect: both nations view leadership in AI as strategic, so ensuring the energy security of AI facilities is crucial. This could lead to more efforts like backup gas peaker plants for key data centres, or even dedicated small nuclear reactors, to immunise critical AI infrastructure from grid disruptions or fuel supply risks. In a hypothetical future standoff, a country that cannot power its AI systems reliably would be at a serious disadvantage.

Europe’s cautious approach

Europe, in contrast, is trying to chart a path that prioritises sustainability — but at the risk of dampening its AI momentum. The EU’s proposed regulations (like the AI Act) not only address ethics but also efficiency. As noted, the AI Act could effectively delay deployment of power-hungry models (e.g., next-gen GPT) until efficiency targets are met.

Additionally, some European countries have taken hard stances on data centre growth due to energy concerns. Ireland’s moratorium on new Dublin-area data centres, for instance, was driven by fears that the national grid couldn’t meet both climate targets and a surge in data centre demand. That moratorium led companies to shift investments to places like Poland and Norway where power is more available.

The consequence is that Europe risks falling behind in AI infrastructure. While US and China race ahead with massive builds (regardless of carbon cost), Europe’s combination of slower cloud growth and higher energy prices could make it less attractive for AI development.

Some experts warn of a potential “digital drift” where European AI innovation migrates to more energy-abundant shores. On the other hand, Europe’s emphasis on efficiency and green power could pay off in the long run, yielding more sustainable operations that align with global climate imperatives (and avoid future regulatory penalties).

Global energy markets and AI investment

It’s not just the big three (US, China, EU). Around the world, countries are jockeying to attract data centre and AI investments — and energy is the key bargaining chip. For example, countries like Norway, Sweden, and Canada promote their abundant renewable energy (hydropower, wind) and cold climates (natural cooling) as ideal for sustainable AI data centres. Norway has lured several major projects by offering 100 per cent renewable power and low cooling costs, appealing to companies with net-zero commitments.

In Asia, Singapore imposed a temporary freeze on new data centres due to energy and land constraints, then lifted it in favour of a selective policy that admits only the most efficient, green designs. India and Indonesia are pitching themselves as emerging data centre hubs, but they will need to rapidly expand grid capacity (and ideally renewables) to deliver on those ambitions.

The energy crisis of 2022 (with spiking fuel prices) was a wake-up call for many: any country that wants to be an AI/cloud hub must ensure cheap, reliable power. This has geopolitical implications: nations rich in clean energy (like Iceland or Quebec with hydro, or Middle Eastern countries with solar + land for data centres) could play a bigger role in the digital economy by hosting energy-intensive AI computation. It’s a new twist on the resource competition of the past — instead of oil or minerals, it’s about attracting “computational industry” with the promise of low-cost electrons.

In summary, leaders need to be aware that AI isn’t happening in a vacuum — it’s intertwined with global energy and policy currents. Decisions about where to site AI operations, which markets to enter, or even which governments to partner with may hinge on energy availability and regulations.

Businesses at the cutting edge of AI should engage in policy discussions: for example, advocating for incentives for clean power or workable regulations that encourage efficiency without stifling innovation.

This is part two of a three-part series exploring AI's energy impact. Read part one here.

Part three of this series looks at the emerging solutions — tech and policy — that could put AI on a more sustainable path, and how companies can harness them.

This article was originally published here and co-authored by Xavier Greco, Founder and CEO of ENSSO.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.

Join us on Instagram, Facebook, X, and LinkedIn to stay connected.

Image courtesy: DALL-E

The post The AI-energy paradox: Will AI spark a green energy revolution or deepen the global energy crisis? — Part 2 appeared first on e27.


Singapore’s next payments chapter will be written by AI and tokenised money

Singapore is doubling down on its ambitions to become Asia’s undisputed payments capital, as a new industry report paints the city-state as one of the world’s most advanced digital and cross-border payments hubs.

The Singapore FinTech Association (SFA), together with PwC Singapore, has launched “Payments’ State of Play 2026”, a sweeping review of how the island nation’s payments ecosystem has evolved over the past decade, and where it is headed next.

Also Read: Fintech rebound: Singapore bags US$1.04B, outpaces global peers

The report argues that Singapore’s rise has been driven by a rare combination of progressive regulation, strong foundational infrastructure, high consumer demand for seamless digital experiences, and close public-private collaboration. What began as basic payment rails has now matured into one of the most sophisticated payment markets globally.

Digital payments dominance and record funding momentum

One of the most striking findings is Singapore’s scale of digital adoption. More than 98 per cent of adults are banked, while real-time payments and digital wallets increasingly dominate everyday transactions.

Digital wallets alone are projected to process US$66 billion in online and point-of-sale transactions by 2027, underscoring how cashless behaviour has become deeply embedded in the country’s economy.

Investor confidence has also remained resilient. The report notes that the city-state’s payments sector raised over US$319 million in funding in the first nine months of 2025 — surpassing the combined fintech funding totals of Indonesia, Malaysia, the Philippines, Thailand, and Vietnam.

Real-time rails powering the ecosystem

Singapore’s domestic payments infrastructure continues to scale rapidly, led by systems such as PayNow and FAST.

FAST transaction volumes hit 500 million in 2024, representing a 31 per cent year-on-year increase, as real-time transfers become the default for consumers and businesses alike.

Card payments also grew strongly, with total value rising at a compound annual growth rate (CAGR) of 12.9 per cent from 2020 to 2024. E-money value expanded at a CAGR of 7.3 per cent over the same period, despite a slight decline in transaction volume.
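The CAGR figures quoted here follow the standard compound-growth formula. A minimal Python sketch illustrates the calculation; the start/end values below are illustrative, not the report's underlying data:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly rate that
    takes start_value to end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Illustrative: a value growing 12.9 per cent per year over 2020-2024 (4 years)
start = 100.0
end = start * (1 + 0.129) ** 4
rate = cagr(start, end, 4)
print(f"{rate:.1%}")  # → 12.9%
```

The same formula underlies the e-money (7.3 per cent) and remittance (5.2 per cent) growth rates cited elsewhere in the report.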

E-money growth and the global wallet boom

Singapore’s digital payments market is expected to accelerate further. Total transaction value reached US$39.37 billion in 2023 and is forecast to climb to US$113.65 billion by 2030.

Also Read: Singapore’s SME fintechs face growth hurdles amid restricted API access

E-money transactions are projected to rise steadily to US$4.28 billion by 2028, supported by AI adoption, embedded finance innovation, stronger stablecoin regulation, and expanding cross-border payment networks.

This trajectory mirrors a wider global shift, with mobile wallet transactions forecast to surge to an estimated US$17 trillion by 2029.

Cross-border connectivity as a regional differentiator

Singapore is also positioning itself as a key settlement and connectivity hub for Asia. Initiatives such as Project Nexus, alongside PayNow linkages with Thailand and Malaysia, are strengthening the city-state’s leadership in cross-border real-time payments.

Total remittance volume reached US$8.05 billion in 2022 and is expected to grow to US$13.34 billion by 2032, representing a CAGR of 5.2 per cent.

Stablecoins, digital assets, and Singapore’s FX strength

The report highlights Singapore’s rising influence in digital assets, particularly stablecoins. The city-state now accounts for over 70 per cent of Southeast Asia’s non-USD stablecoin market pegged to the Singapore dollar, supported by the Monetary Authority of Singapore’s globally recognised regulatory framework.

Singapore is also reinforcing its status as a major foreign exchange hub. The country is now the world’s third-largest FX trading centre, with average daily trading volumes climbing to US$1.485 trillion in April 2025 — a 60 per cent increase from April 2022.

Holly Fang, President of the Singapore FinTech Association, said, “Over the past decade, Singapore has developed one of the most advanced, resilient, and trusted payments ecosystems in the world.”

She added that progressive regulation and industry collaboration have positioned Singapore as a leader in real-time and cross-border payments, while also confronting fraud and scams head-on.

PwC Singapore Partner Wong Wanyi echoed this view, noting, “Payments are evolving rapidly, led by technology and emerging realities, while also presenting new risks.”

Also Read: Singapore’s regulatory vision is shaping cross-border payments in Asia: Report

She emphasised that sustaining Singapore’s leadership will require strong risk management frameworks and regulatory clarity that encourage innovation while building trust.

The next wave: AI, embedded finance, and consumer protection

Looking ahead, the report identifies several trends shaping the next phase of payments innovation:

  • Embedded finance and super apps, integrating lending, investment, and payments into everyday platforms
  • AI-powered payments, enhancing fraud detection and optimising processing
  • Tokenised deposits and regulated stablecoins, expanding use cases in domestic and cross-border payments
  • Greater interoperability, driven by regional initiatives like Project Nexus
  • Stronger consumer protection, amid escalating scam risks

Fraud remains a pressing challenge. As of November 2025, scam-related losses in Singapore reached US$620 million, close to the US$812 million recorded across the whole of 2024 — underscoring the urgency for coordinated action across the ecosystem.

The post Singapore’s next payments chapter will be written by AI and tokenised money appeared first on e27.


How research and startup partnerships are unlocking new opportunities for growth

Strategic collaborations between research institutions and startups are reshaping the innovation landscape, unlocking new opportunities for growth and delivering meaningful societal impact. These partnerships allow scientific and academic entities to access commercialisation channels and adopt more agile development approaches, while startups benefit from resources and industry expertise needed to scale their innovations effectively.

Many early-stage startups look for their first business partners among corporate players. Yet challenges remain: according to a Boston Consulting Group survey, 45 per cent of corporations and 55 per cent of startups express dissatisfaction with their partnership experiences. Science organisations are uniquely positioned to bridge that gap by connecting groundbreaking research with viable business models.

To appreciate the scale of innovation in Southeast Asia, consider this: the region is home to 63 unicorns—companies valued at US$1 billion or more—with over 124,450 startups in total based there as of May 2025.

Around the world, innovation ecosystems are expanding rapidly, with millions of new startups launching annually across regions in North America, Europe, and Asia. Despite this growth, the disconnect between startups and research organisations remains a common obstacle, and the tangible benefits to businesses remain modest. All stakeholders within the innovation ecosystem stand to gain by strengthening these partnerships to better fulfil their promise for society and the economy.

Below are three key benefits to explore.

Enhancing research and development (R&D)

For startups looking to strengthen their R&D efforts by partnering with scientific institutions, there are three key areas to focus on: aligning innovation goals at the project level, establishing clear and open communication channels, and setting precise collaboration expectations within agreements.

Getting everyone aligned on innovation goals at the project level is absolutely crucial. In my experience mentoring startups, many partnerships start with broad, high-level objectives but don’t drill down into specific outcomes for each project. The most successful collaborations are those that sync goals not just strategically, but also at the day-to-day operational level. Using digital tools and collaborative platforms can make this much easier, helping teams coordinate in real time and maintain shared visibility.

Also Read: New research report: The nexus between elite university education and startup funding

Effective communication forms the backbone of any successful partnership, yet transparency often falls short. Issues such as siloed information systems and conflicting priorities can quickly lead to misaligned expectations and wasted resources.

To prevent this, partners should prioritise full visibility into project progress, ensuring that everyone involved has access to accurate, detailed updates—whether by project phase, team, or milestone. Centralising collaboration workflows and clearly understanding associated costs further build trust and accountability.

Equally important is tailoring incentives specifically to joint efforts. Too frequently, research institutions and startups focus on broad research milestones instead of concrete, shared deliverables. This misalignment can cause partners to pursue individual goals rather than common objectives, resulting in resource imbalances where some areas are overstretched while others remain underutilised. Clear, outcome-focused incentives help maintain commitment to the partnership’s overall success.

The Natural Resources Institute Finland (Luke) offers an example of a European research organisation focused on sustainable development through renewable natural resources. Luke conducts extensive research and development across forestry and bioeconomy, supporting both national and international projects.

It provides access to advanced research infrastructures such as greenhouses, research fields, and laboratories, enabling high-quality experimental work. Luke also coordinates the European research infrastructure AnaEE (Analysis and Experimentation on Ecosystems), fostering collaboration and knowledge sharing across countries. Through its involvement in numerous partnerships, Luke plays a key role in turning scientific insights into practical solutions that promote sustainability and well-being.

Fast-tracking commercialisation

Accelerating commercialisation is often the missing piece when startups and research institutions join forces. While both sides excel at innovation, the actual process of getting new ideas to market can get lost in the shuffle. By working together more closely—sharing resources, knowledge, and a unified vision—the journey from discovery to product becomes more efficient and streamlined. This collaboration helps prevent common setbacks such as conflicting priorities, wasted efforts, and delays that can hinder promising technologies.

A concrete example of such effective collaboration is Turion Labs, which recently opened in Singapore as the region’s first comprehensive biotech innovation platform. This joint venture, supported by Korea’s S&S LAB and Indonesia’s Future Lestari, offers modular lab spaces, contract research services, and regulatory assistance within a unified framework.

Turion Labs aims to connect promising scientific research with practical paths to commercialisation. It supports startups and biomedical companies by providing access to advanced laboratory facilities alongside Korean research expertise and Southeast Asian markets. This initiative reflects the growing trend in Southeast Asia to develop collaborative innovation centres that bring together research and industry to help advance biotech development in the region.

Also Read: Nagoya University: Asia’s extensive network of innovation, research, and education

What makes these partnerships work is flexibility. The most successful collaborations aren’t rigid—they adapt to the needs of each project and each team. Startups and research institutions that prioritise both innovation and business efficiency find ways to share risk and align goals, while keeping lines of communication open. This approach is especially important as startups play an ever-larger role in commercialising high-impact innovations.

Uniting diverse talents

Navigating partnerships between science organisations and startups isn’t just about having the latest tech at your fingertips—it’s about bringing together the right people and perspectives. Technology can certainly make collaboration easier, but it’s not a cure-all. The real magic happens when the deep technical know-how of researchers meets the entrepreneurial drive of startup founders, creating space for meaningful innovation.

Still, even with all the collaboration tools available today, many partnerships fall short of their potential. Two issues tend to crop up again and again. First, organisations often jump into new systems without rethinking how they actually work together—like installing state-of-the-art software but sticking to old, inefficient habits. Second, when project goals aren’t clear and data isn’t aligned, teams can end up working at cross purposes, slowing down the move from idea to market.

Good management can make all the difference here. The most effective collaborations bring together cross-functional teams—researchers, entrepreneurs, and other key players—who regularly check in on progress and keep everyone focused on shared milestones. Setting clear, measurable targets keeps things on track and helps spot issues early.

Compelling examples of collaboration between research labs and startups can be seen at the University of Eastern Finland, where joint efforts have led to innovative photonics applications for consumer electronics.

Similarly, the National University of Singapore has partnered with startups through a dedicated program focused on flexible electronics and hybrid systems, driving the development of advanced consumer electronics technologies. These partnerships highlight how academic institutions and startups are working together to push the boundaries of innovation in the consumer electronics sector.

Also Read: Bridging the digital divide: Addressing Malaysia’s skills gap

By combining academic expertise with startup agility, these collaborations have rapidly advanced from lab prototypes to market-ready products.

Starting point

One effective way to kick off collaborations between startups and research institutions is by gaining a thorough, project-level understanding of the partnership landscape. Once that foundation is in place, partners can use a collaboration health map to spot inefficiencies and opportunities at various stages—whether it’s during prototype testing or preparing for market launch.

This kind of tool helps leaders identify the root causes behind common challenges such as misaligned goals or wasted resources. With those insights, they can roll out targeted actions that address the real problems rather than treating surface symptoms. Moreover, this approach helps ensure that improvements are sustainable and don’t fade over time.

By adopting these strategies, startups and science organisations can work more smoothly together and unlock greater value for everyone involved. Of course, the exact approach will vary depending on each partnership’s goals and setup. But no matter the details, taking a proactive stance on managing collaboration can lead to smarter decisions and stronger, more rewarding partnerships.


Join us on Instagram, Facebook, X, LinkedIn, and our WA community to stay connected.

Image credit: Canva Pro

The post How research and startup partnerships are unlocking new opportunities for growth appeared first on e27.


Why visibility in the AI era is a design problem, not a discipline one

Consistency has long been framed as a discipline problem. If you want to stay visible, the advice goes, you simply need to post more, work harder, and show up daily — even when you don’t feel like it.

That framing no longer holds in the AI era.

What we are seeing instead is a shift: Consistency is becoming a systems and design problem, not a willpower one. And the founders who understand this early are the ones building leverage without burning out.

From “working harder” to “designing fewer steps”

I wrote the book The Lazy Person’s Guide to Success, built around a simple idea: If a task takes 100 steps, the real work is figuring out how to reduce it to 10.

That logic still applies today — only now, AI accelerates it dramatically.

Before Seraphina AI, efficiency came from project management tools, SOPs, and documentation. I still use Asana extensively for this reason: It structures information, preserves institutional memory, and makes work retrievable.

What AI changes is not organisation, but execution velocity.

Instead of asking people to remember complex workflows, AI can now guide them through processes in real time. The system doesn’t just store instructions; it actively assists. That distinction matters.

Also Read: Singapore’s AI ambitions face crucial test amid economic and talent pressures

AI as teammate, not replacement

The most effective founders are not using AI as a shortcut for thinking. They are using it as a thinking partner.

I see Seraphina AI as a digital twin or personal assistant — a teammate that is always available, never fatigued, and able to respond on demand. It augments human judgment rather than replaces it.

This is especially evident in content creation.

The bottleneck is no longer production capacity. It is attention and articulation.

Why most people believe they “don’t have content”

When people say they don’t know what to write, they are usually confusing content with output.

In reality:

  • conversations,
  • reactions,
  • opinions formed while reading,
  • reflections shared with peers,

are already content — just undocumented.

AI closes this gap by lowering the cost of capture.

Voice notes, short reflections, or informal messages can be transcribed, structured, and adapted into written formats without losing the original voice. The authenticity remains because the source material is human. AI simply handles transformation and distribution.

Micro habits outperform motivation

The most sustainable form of consistency comes from micro habits, not grand commitments.

A simple example:
When an idea arises, record it immediately — without editing, formatting, or judging its value.

That single habit:

  • reduces friction,
  • bypasses perfectionism,
  • and creates a reliable input stream for AI-assisted processing.

Over time, journaling becomes blogging. Blogging becomes dialogue. Dialogue becomes visibility.

The system compounds quietly.

Also Read: Is AI making it harder for tech startups to survive?

Visibility as choice — and requirement

Visibility today is optional only in theory.

You can choose to remain invisible and still be competent. But if leverage, reach, or influence matter, visibility becomes a functional requirement.

Importantly, visibility does not demand virality. It demands continuity.

Not every idea will resonate with everyone. But resonance does not scale linearly — it clusters. And clusters form communities.

The real blocker is perfection, not fear

In practice, the primary inhibitors of consistency are:

  • overthinking value,
  • waiting for “better” ideas,
  • and mistaking polish for usefulness.

In reality, something does not need to be universally valuable to matter. It only needs to be relevant to someone.

Consistency builds familiarity. Familiarity builds trust.

The skill that matters most as AI does more

As AI expands its capabilities, the differentiator is no longer speed or output.

It is communication.

The ability to articulate thinking, share perspective, and remain present in public discourse is becoming the defining human advantage.

In that sense, consistency is not about effort. It is about design.

And “lazy” consistency — done correctly — is not a lack of ambition, but a strategic choice to let systems do what systems do best, so humans can focus on what only humans can do.


Enjoyed this read? Don’t miss out on the next insight. Join our WhatsApp channel for real-time drops.

Image courtesy: Canva

The post Why visibility in the AI era is a design problem, not a discipline one appeared first on e27.


The AI-energy paradox: Will AI spark a green energy revolution or deepen the global energy crisis? — Part 1

Artificial intelligence (AI) is expanding at breakneck speed, presenting a paradox for global energy systems. On one hand, AI-driven innovations promise efficiency gains in renewable energy management and smarter grids. On the other, the surging power demands of AI threaten to strain electricity infrastructure and increase reliance on fossil fuels.

Current projections indicate data centres — the digital fortresses powering AI — could consume over 1,000 TWh of electricity by 2026, roughly double their 2022 usage. (For perspective, that’s comparable to Japan’s annual power consumption, or about 90 million US homes.)

In the European Union alone, data centre energy use is forecast to reach 150 TWh by 2026, roughly four per cent of EU demand. Gartner even predicts that 40 per cent of existing AI data centres will hit power capacity limits by 2027, underscoring the urgent infrastructure challenge.

This surge places immense pressure on power grids. Cutting-edge AI models require enormous energy: Training a single large language model (LLM) like OpenAI’s GPT series can devour tens of gigawatt-hours of electricity. Some hyper-scale AI data centres already draw 30-100 megawatts each, and future facilities may exceed 1,000 MW (1 gigawatt) — about the output of a large power plant.

One industry analysis notes tech giants are pursuing “gigawatt-scale” data centre campuses to support AI workloads. By 2030, Microsoft and OpenAI’s planned “Stargate” supercomputer could require an astonishing five GW of power.

In response, tech companies are exploring diverse energy strategies. Google, for instance, is investing in advanced nuclear power: it signed a deal to purchase energy from small modular reactors (SMRs), aiming to add 500 MW of carbon-free power by 2030.

Microsoft is turning to nuclear with its Three Mile Island power plant deal, while Amazon and Meta are turning to conventional power plants — in some regions, new natural gas-fired generators — to guarantee reliable power for AI data centres, a strategy supported by utilities. In Wisconsin, regulators approved a US$2 billion gas plant deemed “critical” for Microsoft’s new AI hub.

These moves underline a hard truth: renewables alone can’t yet meet AI’s ravenous base-load demand, prompting a dual-track energy race between carbon-free solutions and fossil fuels.

This brings up pressing questions for business leaders:

  • Will AI ultimately drive sustainability gains or an energy crisis?
  • How are regional disparities and geopolitics shaping AI’s energy footprint?
  • What technological breakthroughs could enable sustainable AI growth?
  • And how should corporate strategy adjust to balance AI’s benefits against its energy and carbon costs?

This three-part guide examines the forces at play — from data centre trends and energy innovations to policy and geopolitical factors — to help corporate decision-makers navigate AI’s energy revolution.

The goal: understand the macro and geopolitical impacts of AI’s energy consumption, and chart a course that leverages AI’s power responsibly and sustainably.

The energy cost of AI: Hard truths and hidden opportunities

Global data centre electricity consumption reached an estimated 460 TWh in 2022, with AI and cryptocurrency operations accounting for roughly 14 per cent of that load, according to the International Energy Agency (IEA).

Now AI is pushing those numbers dramatically higher. Projections show data centres worldwide could consume over 1,000 TWh by 2026 — roughly doubling in just four years. By 2030, some forecasts see a further 160 per cent increase in data centre power demand driven by AI.

Also Read: Eco-investing: Driving change through climate technology and strategic finance

This growth is concentrated in key AI hubs and “cloud clusters” with serious consequences for local grids:

  • In Northern Virginia’s famed “Data Centre Alley,” the massive concentration of servers has led to power quality issues. The region now experiences voltage distortions four times higher than the US average, raising the risk of appliance damage and even fires for surrounding communities. Utilities warn that traditional grid infrastructure is straining to keep up with the load.
  • In central Ohio, data centre capacity has quadrupled since 2023, consuming so much electricity that utility AEP had to halt new data centre connections, despite a 30 GW queue of projects waiting to plug in. Simply put, the grid can’t be expanded fast enough to accommodate the sudden surge in demand.
  • Ireland faces a similar crunch — by 2026, data centres are projected to gobble up 32 per cent of Ireland’s electricity. Dublin’s metro grid is so stressed that the government imposed a moratorium on new data centres in the area, shifting over US$4 billion in planned investments to other countries.

The energy intensity of AI is a key reason demand is outpacing capacity. A few eye-opening facts illustrate the scale:

  • Training a single large AI model can consume enormous amounts of electricity. For example, training ChatGPT/GPT-3 (with 175 billion parameters) is estimated to use on the order of 1-1.3 GWh (gigawatt-hours) of energy — roughly the yearly electricity usage of over 1,000 US homes. And that’s for one training run. Newer models like GPT-4 are even more power-hungry — estimates suggest on the order of 50-60 GWh for a full training cycle, which would be enough to power ~4,500 homes for a year (and emits tens of thousands of tons of CO₂). In other words, one large AI model’s training = years of household electricity.
  • Running AI models (inference) is also energy intensive. AI queries consume about 10× more electricity than a typical Google search. Every time you ask ChatGPT a question, a network of GPUs fires up, drawing far more power than a standard web search. Multiply this by millions of queries, and the energy adds up fast. Microsoft and Amazon have responded by securing huge dedicated power supplies for their cloud AI operations — on the order of 500 MW to 1,000 MW per data centre campus — to ensure they can handle the surging demand. For perspective, a single 1,000 MW data centre campus could consume as much power as 750,000 homes.
  • The sheer consumption of top tech companies is staggering. In 2023, Microsoft and Google each used ~24 TWh of electricity — more power than entire countries like Iceland, Jordan, or Ghana consume in a year. This puts their usage above that of over 100 nations. While these firms have aggressive renewable energy programs, the scale of their energy draw highlights how big the AI computation boom has become.
  • The cloud giants are investing heavily to keep this sustainable. Microsoft recently announced a US$10+ billion deal with Brookfield to develop 10.5 GW of new solar and wind farms by 2030 — an unprecedented corporate clean power purchase aimed squarely at running its AI and cloud data centres on carbon-free energy. Amazon and Google are similarly pouring funds into renewables and even experimental technologies (like advanced geothermal and batteries) to offset their growing AI footprint.
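The household-equivalent comparisons above are straightforward unit arithmetic. A short sketch reproduces the 1,000 MW campus example; the per-home figure of ~11,700 kWh per year is an assumption consistent with the 750,000-home comparison (actual household averages vary by country):

```python
# Annual energy of a data-centre campus drawing constant power,
# expressed as the number of average homes it could supply.
HOURS_PER_YEAR = 8_760
KWH_PER_HOME_YEAR = 11_700  # assumed average annual household use

def homes_equivalent(campus_mw: float) -> float:
    annual_kwh = campus_mw * 1_000 * HOURS_PER_YEAR  # MW → kW, then kWh/year
    return annual_kwh / KWH_PER_HOME_YEAR

print(round(homes_equivalent(1_000)))  # → 748718, i.e. roughly 750,000 homes
```

The same conversion explains why a 30-100 MW facility already rivals a small city’s residential load.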

Despite these efforts, power constraints are emerging as a growth limiter for AI. Industry analysts warn that in the next few years, many data centre operators (especially those not backed by big tech) may find it difficult or prohibitively expensive to get the electricity they need.

Gartner projects that by 2027, 4 in 10 AI data centres worldwide could hit their power capacity ceiling, meaning their expansion will be stalled by energy shortages. For enterprises, this could translate to slower cloud rollouts or higher costs as energy prices rise.

However, within this hard truth lies a hidden opportunity — AI itself can help solve the energy challenge. As we’ll explore, the same technology driving up consumption can also drive greater efficiency and new solutions, if wielded wisely.

Also Read: The key to tackling climate change: Electrify shipping

Comparing AI models: Power hunger from GPT to KNN

Not all AI is equally power-hungry. There is a vast gap in energy consumption between large, state-of-the-art AI models and more traditional algorithms. Understanding this spread can help leaders choose the right AI tools for the job — balancing capability and cost. The table below compares examples of AI models:

Table: Energy requirements for training various AI models range over orders of magnitude. Cutting-edge deep learning models (top rows) consume enormously more energy than smaller neural nets or classical machine learning methods (bottom rows). Choosing a right-sized model can avoid wasting power.

As the table shows, today’s largest AI models (like GPT-3/4) dwarf earlier AI in power needs. Training GPT-4 can use about 50,000× more energy than training a typical convolutional neural network (CNN) like ResNet-50 used for image recognition.

And an old-school algorithm like k-nearest neighbors (KNN) or an ARIMA forecast model might use a million-times less energy — essentially negligible in comparison.

This doesn’t mean companies should avoid large AI models altogether; rather, it underscores the importance of right-sizing AI to the task. You don’t always need a billion-parameter model if a simpler one works — and the energy (and cost) savings from a leaner approach can be huge.

Key takeaway: AI’s energy footprint isn’t uniform. Generative AI and other complex models can be incredible but come with extreme energy costs.

Business leaders should evaluate whether a smaller, more efficient model could meet their needs. In many cases, optimized or “distilled” models, or running AI at the network edge, can deliver acceptable performance while using a fraction of the power. This efficiency-centric approach to AI adoption will become increasingly vital as energy pressures mount.

Fossil fuel lock-in vs a nuclear renaissance

The tug-of-war between AI’s energy demand and clean energy supply is pushing companies down two very different paths. On one side, some firms and regions are doubling down on fossil fuels to keep the lights on for AI. On the other, there’s a growing movement toward a nuclear revival (along with renewables) to power AI sustainably.

Also Read: What does Trump mean for SEA climate scene?

On the fossil fuel front, oil and gas producers see AI’s rise as a new source of demand for hydrocarbons. BP’s CEO Murray Auchincloss, for example, predicts AI’s infrastructure build-out could drive an extra 3-5 million barrels per day of oil demand growth through the 2030s, as data centres and associated supply chains consume more energy (fuel for generators, diesel for construction, etc.). Likewise, Shell’s latest Energy Security Scenarios project natural gas demand reaching 4,640 billion cubic meters annually by 2040, partly to fuel backup generators for data centres and provide grid stability in an AI-enabled economy.

These trends raise concerns that AI could inadvertently lock in a new wave of fossil fuel dependence right when the world is trying to decarbonise. For instance, in the US, some utilities are proposing 20+ GW of new gas-fired power plants by 2040, largely to meet data centre growth.

This runs directly against climate goals — building gas infrastructure that could last 40-50 years to serve what might be a short-term spike in AI-related demand.

Conversely, a potential “nuclear renaissance” is being driven by AI’s 24/7 power needs and corporate clean energy pledges. Nuclear power offers steady, carbon-free electricity that is highly appealing for always-on AI workloads. We’re seeing concrete steps in this direction:

  • Microsoft is investing US$1.6 billion to help reopen the dormant Three Mile Island nuclear plant in Pennsylvania, aiming to secure 24/7 carbon-free power for its AI data centres by 2028. This would repurpose an existing nuclear reactor to directly feed Microsoft’s cloud operations — a bold bet on nuclear as a reliable green energy source for AI.
  • Amazon and Google have each committed at least US$500 million in financing to startup companies developing small modular reactors (SMRs). Their goal is to have about 5 GW of new nuclear capacity from SMRs online by the mid-2030s. Google’s agreement with Kairos Power, for instance, targets the first SMR operational by 2030. If successful, these would be game-changers: modular reactors could be built near data centres to provide dedicated clean power.
  • In Europe, policymakers are increasingly viewing nuclear as essential for meeting AI’s power demands. The EU projects that nuclear-powered data centres (where data centres are co-located with nuclear plants or dedicated reactors) could supply 15-25 per cent of the new electricity needed for AI and digital growth through 2030. France and the UK have floated incentives for data centre operators to hook into existing nuclear plants, while countries like Romania and Estonia are partnering on SMR deployment with an eye toward tech sector needs.

The contrast is striking: Will the AI era deepen our fossil fuel dependence or accelerate the shift to alternative energy?

In practice, both are happening — but the balance could tip one way or the other based on economics and policy. Natural gas plants currently often win on cost and speed (a gas turbine can be built faster than a nuclear plant and is a proven solution to instantly boost capacity).

Indeed, “the only concrete plans I’m seeing are natural gas plants,” notes one energy consultant about data centre expansions. Yet, as carbon costs rise and modular nuclear tech matures, nuclear and renewables could prove the more attractive long-term play.

For corporate leaders, this means energy strategy is becoming inseparable from AI strategy. Companies may need to directly invest in energy projects (like Microsoft’s and Google’s deals) to ensure their AI ambitions have a viable power supply. Those that succeed in securing reliable, clean energy will not only meet sustainability goals but also gain an operational advantage (avoiding the risk of power constraints slowing their AI deployments).

This is part one of a three-part series exploring AI’s energy impact.

Part two of this series examines how AI can enhance energy efficiency and optimise grid management to address this challenge.

This article was originally published here and co-authored by Xavier Greco, Founder and CEO of ENSSO.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.

Join us on Instagram, Facebook, X, and LinkedIn to stay connected.

Image courtesy: DALL-E

The post The AI-energy paradox: Will AI spark a green energy revolution or deepen the global energy crisis? — Part 1 appeared first on e27.

Posted on

Building smart: A tech founder’s guide to the semiconductor supply chain revolution

The digital era is underpinned by a technology so small it is nearly invisible: the semiconductor. These tiny chips power the devices and systems that define modern life, from smartphones and electric vehicles to AI servers and medical imaging equipment.

As the global demand for semiconductors grows, the race to build resilient, agile, and forward-looking supply chains has never been more critical. For tech founders, especially in Southeast Asia (SEA), understanding this ecosystem is not just strategic—it is existential.

According to Source of Asia, SEA has carved a significant niche in the global semiconductor value chain. While front-end fabrication remains dominated by Taiwan and South Korea, the region has emerged as a vital centre for back-end processes: Assembly, Testing, and Packaging (ATP). These steps are essential to the chip lifecycle and offer enormous value for tech companies seeking reliable, cost-efficient solutions.

Countries such as Malaysia and Vietnam are rapidly becoming semiconductor hotspots. This is driven by low operational costs, supportive government policies, and modern infrastructure. These advantages, coupled with a skilled workforce, have made the region attractive to multinationals and startups alike.

As industries from automotive to telecommunications deepen their reliance on semiconductors, SEA’s role in maintaining global supply chain stability continues to grow. This makes it an ideal launchpad for startups aiming to scale amid geopolitical flux and accelerating digital transformation.

Also Read: GridCARE raises US$13.5M led by Xora to fuel AI’s energy needs

Navigating the semiconductor age demands more than just sourcing components. It requires forming the right strategic partnerships—those that bring not only capital but also technical expertise, global reach, and shared vision.

Infineon Technologies exemplifies such a partner. As a global semiconductor leader, Infineon is committed to driving decarbonisation and digitalisation through power systems and IoT solutions. Their products support everything from clean mobility to smart energy systems. With over 58,000 employees across more than 100 countries, Infineon is not just delivering chips; they are engineering a better tomorrow.

Partnerships like these are crucial for tech founders building hardware or AI-enabled platforms. Having access to high-quality semiconductor technologies, paired with expertise in sustainability and systems integration, can provide a competitive edge in both product performance and market perception.

The capital conduit: Investing in innovation

While tech is the engine, capital is the fuel. Vertex Ventures Southeast Asia and India (VVSEAI) has long recognised this dynamic. The fund has helped build companies such as Grab and PatSnap by not just writing cheques, but also providing strategic counsel, talent access, and introductions to customers and partners across the globe.

For tech founders in the semiconductor-adjacent space—whether in manufacturing, logistics, or AI—VVSEAI offers a unique combination of regional insight and global connectivity. With a presence in every major innovation hub through the Vertex Global Network, their teams share learnings across borders to help startups scale faster and smarter.

As chips grow more complex and demand for efficiency spikes, AI becomes indispensable in semiconductor operations. Innowave Tech is pioneering this shift. The company’s industrial AI solutions address challenges across predictive maintenance, quality assurance, and process automation. By replicating human judgment through edge AI and deep learning, Innowave helps manufacturers streamline operations and reduce downtime.

Also Read: Singapore’s AI ambitions face crucial test amid economic and talent pressures

One of Innowave’s most powerful contributions is in supply chain optimisation. By digitising material flows and applying analytics to forecasting and logistics, they create agile networks that respond swiftly to market changes—a capability that has become mission-critical in today’s unpredictable geopolitical climate.

Future-proofing through knowledge

Understanding the intricacies of semiconductor supply chains is no longer the domain of engineers and operations managers alone. Founders must grasp the broader implications—from sustainability and digital twin adoption to geopolitical risk and capital flow.

This is why the panel discussion “Building in the Semiconductor Age: What Tech Founders Need to Know About Supply Chains, Partnerships, and Strategic Positioning” is unmissable.

Join industry leaders Teong Wei Tan (Infineon), Chan Yip Pang (Vertex Ventures), and Jinsong Xu (Innowave Tech) as they decode the future of semiconductors and what it means for entrepreneurs.

📅 Echelon Singapore 2025
📍 Suntec Singapore
🗓 June 10–11
🕥 Panel: June 11, 10:30 AM – 11:20 AM at Forge Stage

Secure your seat now and future-proof your startup for the semiconductor-powered decade ahead.

The post Building smart: A tech founder’s guide to the semiconductor supply chain revolution appeared first on e27.

Posted on

Is AI making it harder for tech startups to survive?

Imagine a Singaporean biotechnology startup leveraging artificial intelligence (AI) in diagnostic solutions to determine the most effective cancer treatments for more rapid recovery. Meanwhile, a small agritech company might deploy AI-powered drones to enhance irrigation, pest management and crop health monitoring in rural India. 

AI innovations have rapidly become a non-negotiable driver of success in technology startups, particularly across Asia. Yet, despite these innovations’ ability to streamline functions, boost invention and personalise customer experiences, technology startups face several challenges that can hinder achievement.

Challenges faced by tech startups in the AI age

Despite AI solutions having the power to transform technology startups, integrating them isn’t always straightforward. These are some of the greatest integration difficulties in startup culture.  

Talent acquisition and up-skilling

The skills required for AI-influenced jobs change 25 per cent faster than jobs less impacted by AI, meaning workers must continuously up-skill to stay relevant. Compensation for positions relying on AI expertise also tends to be 25 per cent higher, incentivising professional development and highlighting the importance of AI to companies.

As it stands, talent availability is lacking. Saikat Banerjee — a leader in Bain & Company’s AI, Insights, and Solutions practice — says there will be 1.5 to 2 times more AI-related job openings than professionals to fill them by 2027.

According to an MIT Sloan study, 85 per cent of entrepreneurs agree they critically need an AI strategy, whether to seek new opportunities, encourage groundbreaking product development or gain deeper insight into the customer journey. 

Data collection and governance 

Because startup companies are in their infancy, they do not always have relevant data points to train AI models. Data quality and diversity are also crucial. Otherwise, inputs may result in inaccuracies, biases and inadequate predictions, with serious consequences in health care or financial settings. 

Data privacy regulations are on the rise throughout Asia. For instance, Korea’s Personal Information Protection Commission (PIPC) has issued rules allowing consumers to ask about AI decision-making, such as how it makes certain hiring decisions. Hong Kong also encourages responsible AI use in businesses by promoting fairness and transparency.

Also Read: How Hasan Venture Capital uses AI to build an ethically grounded investment future

Infrastructure and computing power

Technology startups must contend with the high costs of cloud computing and specialised equipment for training AI models. As these solutions grow more sophisticated, the need for expansion and additional resources may further strain a startup’s budget. 

Areas with inconsistent internet connectivity could also affect AI performance. According to one report, internet use is 22.5 per cent lower in rural Southeast Asia than in urban areas, except in Singapore and Brunei. Climate change impacts, especially in Indonesia, the Philippines and Vietnam, may also hinder broadband infrastructure investments.

Biases and fairness

Startups must address biases within AI systems. This includes unfair decision-making based on gender, age or race. Failing to mitigate biases could hurt a startup’s reputation and lead to noncompliance. 

Biases may arise during data collection due to insufficient information capture, or when data is fed to models during training. Some regions have introduced rules requiring companies to recheck data for fairness before continuing to train models.
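A lightweight first pass at rechecking data before training can be done with a simple group-wise rate comparison. The sketch below uses hypothetical hiring records and an assumed disparity threshold; real audits would use a dedicated fairness toolkit and domain-appropriate metrics.

```python
from collections import defaultdict

def positive_rate_by_group(records, group_key, label_key):
    """Share of positive labels per group, e.g. shortlist rate by gender."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += r[label_key]
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical hiring data: 1 = shortlisted, 0 = rejected.
data = [
    {"gender": "F", "shortlisted": 1}, {"gender": "F", "shortlisted": 0},
    {"gender": "F", "shortlisted": 0}, {"gender": "F", "shortlisted": 0},
    {"gender": "M", "shortlisted": 1}, {"gender": "M", "shortlisted": 1},
    {"gender": "M", "shortlisted": 1}, {"gender": "M", "shortlisted": 0},
]

rates = positive_rate_by_group(data, "gender", "shortlisted")
print(rates)  # {'F': 0.25, 'M': 0.75}

# Flag a large gap before any model is trained on this data.
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # threshold is an assumption; set per your own policy
    print(f"Warning: {gap:.0%} disparity between groups")
```

A check like this catches only the most obvious skew, but it costs minutes to run and gives a startup an early signal that its training data needs rebalancing or further review.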

Funding and investment

Because AI is still developing, technology startups must secure funding to demonstrate the tools’ potential to stakeholders. The most effective approach is establishing clear AI initiatives with each project’s likely return on investment. Asian markets can seek government grants and venture capital for AI specialisations.

China is a prime example, having previously directed 23 per cent of its US$912 billion in government venture capital funds to 1.4 million early-stage AI startups. The Chinese government issues much of this venture capital to firms with lower software development costs and those showing signs of higher growth from the investment.

Integration and implementation

AI implementation may be difficult in existing systems and workflows, especially if teams are resistant or lack proper training. These factors can also put a startup at risk of scams. 

For example, AI models need access to sensitive data. If personal information gets into the wrong hands, businesses and their customers may be susceptible to scammers. Bad actors may use AI tools to create convincing deepfakes of people or communications to collect money. Others may use fraudulent chatbots impersonating customer service representatives to steal credit card information.

Also Read: Navigating the trust labyrinth: My perspective on ethical AI marketing

According to a Deloitte report, only 33 per cent of employees have received generative AI training, and 35 per cent say they weren’t satisfied with their learning. A company must ensure a clear strategy for AI integrations and prepare its employees for the change. 

Tips for startups to overcome these challenges

Technology startups must keep up with evolving AI advancements even as they find their footing. Companies should concentrate their investments in talent acquisition, data management and computing infrastructure for maximum returns. 

Integrating AI into a company’s business plan should focus on concrete outcomes and revenue. Seeking investors with AI knowledge and pursuing federal grants and funding programs — including crowdfunding — is another way to garner capital, test the market and reduce risk. 

A successful technology startup is only as good as those working there. Therefore, finding the best talent with AI expertise and providing comprehensive training and professional development is essential.

Additional suggestions for overcoming the challenges of AI in a technology startup include:

  • Explore public data platforms and exchanges.
  • Enhance training data by modifying existing points and creating new, quality data from scratch.
  • Implement stringent data management and security measures.
  • Utilise cloud computing for adaptability and scalability.
  • Improve AI model efficiency for the most productive resource utilisation.
  • Integrate AI with smaller, more concentrated projects, such as resolving specific business-related issues.
  • Make improvements to AI tools according to feedback and results.
  • Encourage employee and stakeholder engagement during AI implementation.
  • Support employees with AI training.

It is equally important to address potential biases in AI technology. Startup owners might consider launching an ethics committee or advisory board to establish responsible AI development and utilisation. The committee will review AI projects, detect possible biases, and prioritise transparency to build trust and manage risks.

Embracing AI in the startup landscape

As AI advances, startups should find ways to adopt it in practice. Although the challenges are valid, AI can transform businesses for the better. Considering startups must build themselves from the ground up, embracing AI responsibly and gradually is a sure path to success.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.

Join us on Instagram, Facebook, X, LinkedIn, and our WA community to stay connected.

Image credit: Canva Pro

The post Is AI making it harder for tech startups to survive? appeared first on e27.

Posted on

Singapore’s AI ambitions face crucial test amid economic and talent pressures

Singapore’s push to lead in artificial intelligence faces mounting headwinds as global economic pressures and persistent talent shortages undercut momentum.

According to a new survey by global HR and payroll platform Deel, conducted with Milieu Insight, 81 per cent of Singaporean companies report negative impacts from global tariffs, with many forced into difficult workforce decisions such as wage freezes, reduced hiring, and retrenchments.

Also Read: Southeast Asia’s AI divide: SleekFlow report warns of widening gap

The findings—based on responses from 350 business leaders across SMEs and large enterprises—reveal a critical inflection point for the island nation’s digital transformation agenda. Despite AI’s promise to boost productivity and efficiency, adoption remains uneven and cautious across the market.

Global shocks temper AI optimism

Singaporean businesses find themselves squeezed between escalating operational costs due to tariffs and the imperative to invest in innovation. More than half (56 per cent) of respondents cite increased costs, with AI-forward companies feeling this pinch even more acutely (86 per cent).

Yet, the potential benefits of artificial intelligence remain compelling. Companies leveraging AI report tangible gains: 71 per cent cite improved productivity, 61 per cent report operational optimisation, and 50 per cent realise cost savings. Nearly a third (31 per cent) have accelerated AI and automation in response to global instability—an indication that AI is seen as a resilience tool in volatile times.

Talent bottlenecks slow AI deployment

Even as the benefits of AI become clearer, Singapore’s talent pipeline lags behind. A staggering 68 per cent of businesses are still in the early stages of AI adoption, with only 12 per cent of SMEs reaching intermediate levels, compared to 43 per cent of larger enterprises.

Talent shortages are the main culprit. Nearly half of the respondents say local AI expertise is insufficient, and high salary expectations, limited career growth, and skill mismatches further hinder recruitment.

As a stopgap, 62 per cent of firms are open to hiring from overseas, but only 20 per cent have budgets set aside to reskill their current workforce—a disconnect that could stall sustainable progress.

“Talent remains the single biggest barrier to scaling AI,” said Nick Catino, Global Head of Policy at Deel. “Cross-border hiring can fill gaps, but must be paired with effective knowledge transfer to uplift local teams”.

Government support recognised but underutilised

Singapore has laid out comprehensive strategies to foster AI, including the National AI Strategy (NAIS 2.0). However, awareness and engagement remain low.

While 92 per cent of businesses see government support as vital—particularly in funding and upskilling—only 5 per cent are actively engaging with existing AI frameworks. A striking 95 per cent say they are unfamiliar or only mildly familiar with the governance framework.

Also Read: AI and automation in Southeast Asia: Which jobs are at risk and which will thrive?

This lack of engagement comes despite calls for stronger regulatory guardrails from 57 per cent of the respondents. The gap suggests that while the government’s intent is clear, execution and awareness-building efforts need urgent reinforcement.

Aligning talent, policy, and tech for a future-ready Singapore

As the AI race intensifies, Singapore must bridge its knowledge and talent gaps to sustain its leadership. Proactive engagement with policy frameworks, robust upskilling strategies, and targeted AI investments will be essential. Only through this alignment can the city-state realise the transformative potential of AI—turning today’s headwinds into tomorrow’s competitive edge.

Enjoyed this read? Don’t miss out on the next insight. Join our WhatsApp channel for real-time drops.

The picture was generated by ChatGPT.

The post Singapore’s AI ambitions face crucial test amid economic and talent pressures appeared first on e27.