
Featherless.ai wants to make AI model switching as easy as streaming Netflix

Featherless.ai founder and CEO Eugene Cheah

Featherless.ai, a US-headquartered startup founded and led by Singapore-born CEO Eugene Cheah, has a blunt mission: make the messy, fast-changing world of open-source AI easy to run in production.

The company recently raised US$20 million in Series A funding co-led by AMD Ventures and Airbus Ventures, and plans to use the capital to scale global infrastructure, launch a marketplace for specialised open models and deepen hardware integrations to cut inference costs.

In plain English, Featherless helps companies run lots of open-source AI models quickly, cheaply and safely, without forcing them to rely on one giant model or on a single cloud vendor.

Also Read: Featherless.ai secures US$5M to make AI inference faster and cheaper

What sets it apart is an operational promise that sounds almost magical: hot-swapping models in under five seconds, compared with the typical 30 minutes on a GPU. It’s a capability that, if it works at scale, could change how organisations deploy models: from one-size-fits-all behemoths to specialised fleets tailored to discrete tasks.

How hot-swapping actually works

Cheah explains the technical rethink that enables rapid model swaps. “Most inference providers treat each model like a standalone deployment. Load the full weights, warm up the runtime, and serve. Each requires hours of setup. That works fine if you’re running one model. We run over 30,000. And we plan to scale to millions; you can’t have millions of GPUs on standby for every model,” Cheah says.

Featherless’s approach is a systems-level redesign. Models live in hot, warm or cold states across a multi-tier cache and memory-management layer covering the GPU fleet. When a request targets a model that isn’t resident, the platform “hydrates” it from a pre-optimised checkpoint rather than raw weights, an optimisation that dramatically reduces load time.

Three engineering pillars make this possible: normalising and quantising weights at ingest time, proprietary storage and memory-loading techniques for GPUs, and a demand-prediction scheduler that pre-stages models before requests arrive.
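Featherless has not published its implementation, but the described tiering can be pictured with a short sketch. Everything below (the class, the slot counts, the placeholder fetch and hydrate steps) is an assumption for illustration; it shows the shape of a hot/warm/cold cache with demand-driven pre-staging, not the company's actual code:

```python
from collections import OrderedDict


class ModelCache:
    """Illustrative hot/warm/cold tiering: hot models sit in GPU memory,
    warm ones are staged as pre-optimised checkpoints in host memory,
    everything else stays cold in storage until hydrated on demand."""

    def __init__(self, hot_slots: int = 4, warm_slots: int = 32):
        self.hot = OrderedDict()    # model_id -> loaded weights (LRU order)
        self.warm = OrderedDict()   # model_id -> staged checkpoint
        self.hot_slots = hot_slots
        self.warm_slots = warm_slots

    def request(self, model_id: str):
        if model_id in self.hot:                  # hot hit: serve immediately
            self.hot.move_to_end(model_id)
            return self.hot[model_id]
        checkpoint = self.warm.pop(model_id, None)
        if checkpoint is None:                    # cold miss: pull checkpoint
            checkpoint = self._fetch_checkpoint(model_id)
        weights = self._hydrate(checkpoint)       # load pre-optimised form
        self._admit_hot(model_id, weights)
        return weights

    def prestage(self, model_id: str):
        """Called by a demand-prediction scheduler ahead of expected traffic."""
        if model_id not in self.hot and model_id not in self.warm:
            self.warm[model_id] = self._fetch_checkpoint(model_id)
            while len(self.warm) > self.warm_slots:
                self.warm.popitem(last=False)     # evict least recently staged

    def _admit_hot(self, model_id, weights):
        self.hot[model_id] = weights
        while len(self.hot) > self.hot_slots:     # demote the coldest hot model
            evicted_id, _ = self.hot.popitem(last=False)
            self.warm[evicted_id] = self._fetch_checkpoint(evicted_id)

    def _fetch_checkpoint(self, model_id):
        return f"checkpoint:{model_id}"           # placeholder for storage read

    def _hydrate(self, checkpoint):
        return f"weights<{checkpoint}>"           # placeholder for GPU load
```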

There are trade-offs. “The first inference on a freshly swapped model carries slightly higher latency, a few hundred milliseconds more than a model that’s been sitting warm for hours. In practice, users don’t notice. The real trade-off was engineering effort,” Cheah says. The payoff is higher utilisation and lower cost, especially in environments that require many specialised models rather than a single monolithic system.

Model pluralism in practice

Featherless pitches itself as an antidote to the “one-model-to-rule-them-all” mindset. The platform lets enterprises define intents (code generation, German customer support, or compliance summarisation, for example), and Featherless routes those intents to the best-fit model, with fallbacks and failover chains.

“Model pluralism should not mean operational pluralism,” Cheah says. “The whole point of 30,000 models is that you always get the right one. But the system delivering it should feel like one thing, not 30,000 things.”

Practically, customers run a thin orchestration layer that maps business tasks to Featherless endpoints; the platform handles selection, versioning and serving. Monitoring is unified around tasks rather than individual models, making A/B testing and swaps painless.
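To make that concrete, here is a minimal sketch of what such a customer-side routing table might look like. The intent names, model identifiers and call_endpoint placeholder are hypothetical and are not Featherless's API:

```python
# Hypothetical intent-to-model routing table with failover chains.
ROUTES = {
    "code_generation": ["qwen-coder", "deepseek-coder", "llama-70b"],
    "german_customer_support": ["mixtral-8x22b", "llama-70b"],
    "compliance_summarisation": ["long-context-70b", "llama-70b"],
}


def call_endpoint(model_id: str, prompt: str) -> str:
    """Placeholder for an HTTP call to the serving platform's endpoint."""
    return f"[{model_id}] response to: {prompt[:40]}"


def route(intent: str, prompt: str) -> str:
    """Try the best-fit model for the intent, falling back down the chain."""
    errors = []
    for model_id in ROUTES[intent]:
        try:
            return call_endpoint(model_id, prompt)
        except Exception as exc:          # in practice: timeouts, 5xx responses
            errors.append((model_id, exc))
    raise RuntimeError(f"All models failed for intent '{intent}': {errors}")


print(route("german_customer_support", "Where is my order?"))
```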

Quality, safety and languages

Offering a vast catalogue of open models creates obvious questions about safety, bias and multilingual performance. Featherless applies a layered curation approach: automated screening for licences and architecture checks, inference health tests, and surfaced metadata to help teams make informed choices. Enterprise customers can add stricter tiers: bias benchmarking, multilingual audits and consistency testing.

Also Read: Will the rise of AI mean the ‘termination’ of humankind?

“We don’t claim perfect parity; that would be dishonest given the state of the field,” Cheah says, acknowledging the uneven quality of models across languages. The firm’s history with RWKV (a model architecture designed for multilingual efficiency) informs both research and serving decisions. Featherless stresses transparency: training data provenance, benchmark results and limitations are made available so customers can match models to their needs.

Low-resource and morphologically complex languages pose extra challenges. There’s less high-quality training data, tokenisation can be inefficient and standard transformer architectures hit scaling limits for long contexts. Featherless evaluates models across language families with standardised benchmarks and works with customers to build task-specific evaluation datasets. The company is careful not to promise parity when the underlying data and modelling aren’t yet in place.

Sovereignty, hardware and regional strategy

Featherless frames “AI sovereignty” as a three-layer problem: data residency, model provenance and hardware dependency. On the first layer, the solution is straightforward: deploy where data must stay. On the second, open models make provenance auditable and replaceable. The third layer, hardware, is the trickiest: much of production AI today runs on a proprietary stack dominated by a single vendor.

“That’s why our AMD partnership and ROCm investment isn’t just commercial; it’s strategic,” Cheah says. Featherless aims to prove the stack can run on open hardware with open software, reducing vendor lock-in at the compute layer.

The company is bullish on Southeast Asia’s potential for AI: pragmatic regulation, mobile-first engineers accustomed to multilingual products, and geographic proximity to major compute hubs. The weak points are familiar: insufficient regional GPU capacity and shallower venture capital. Cheah calls for public-private investment in compute and model development tailored to local needs.

Governance, audit trails and compliance

Featherless recognises enterprise concerns about reproducibility and auditability. Bitwise reproducibility across GPU runs is difficult due to non-deterministic floating-point behaviour; Featherless prioritises practical reproducibility. “Pinned model versions, fixed quantisation configs, seeded sampling parameters. Same model version + same config + same seed = same output,” Cheah says. The platform tracks every model version and configuration, logging model IDs, version hashes, configurations and routing metadata for each request. Enterprises can also opt for private deployments so data never leaves their perimeter.
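As a rough illustration of that recipe and of the per-request logging described above, consider the sketch below; the field names, quantisation setting, routing metadata and completion placeholder are assumptions, not the platform's actual interface:

```python
import hashlib
import json
import time

audit_log: list[str] = []     # stand-in for a durable, append-only audit store


def completion(prompt: str, **config) -> str:
    """Placeholder for the actual serving call."""
    return f"<output from {config['model']} @ {config['version_hash'][:8]}>"


def pinned_request(model_id: str, version_hash: str, prompt: str,
                   seed: int = 1234, temperature: float = 0.0) -> str:
    """Pin everything that affects the output so a run can be reproduced."""
    config = {
        "model": model_id,
        "version_hash": version_hash,     # e.g. hash of the quantised checkpoint
        "quantisation": "int8",
        "seed": seed,
        "temperature": temperature,
        "top_p": 1.0,
    }
    output = completion(prompt, **config)

    # Audit record: enough metadata to trace (and re-run) the request later.
    record = {
        "timestamp": time.time(),
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "routing": {"region": "eu-west", "replica": "gpu-17"},   # illustrative
        **config,
    }
    audit_log.append(json.dumps(record))
    return output
```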

Handling licences and problematic training data is treated as a transparency exercise rather than a legal shield. Models are classified by licence at ingest, customers see licence details up front, and enterprise customers can filter models by licence category. Featherless maintains a watch list for models with provenance concerns and highlights models trained on explicit public-domain or licensed datasets.

When one model fails

Cheah offers a concrete example to illustrate the costs of a single-model approach. A Series B fintech used a single large closed model for everything—chatbots, transaction categorisation, and compliance summarisation. Over time, costs ballooned, latency rose during peak traffic, and GDPR obligations complicated European expansion.

Also Read: AI adoption is an area of maturity for SMEs, but they have advantage over big corporations: Aicadium

After decomposing workloads across Featherless, the company saw roughly a 65 per cent reduction in total inference costs and substantial latency improvements: conversational workloads were moved to a smaller, faster model (latency down 70 per cent, cost down 80 per cent for that workload), compliance tasks ran on a long-context model in the EU, and categorisation moved to a lightweight classifier. Importantly, governance became tractable.

Risks and the road ahead

Cheah is candid about the threats to Featherless’s thesis: hyperscalers undercutting pricing, consolidation of model development, hardware disruptions and an edge shift where devices handle more inference. His response is to double down on neutrality, breadth of catalogue, optimisation depth and vendor-agnostic engineering. “Open models win, inference needs to be efficient, neutrality matters. Those hold regardless of which specific risk plays out,” he says.

Featherless’s bet is operational: make it trivial to run many open models reliably, cheaply and compliantly across geographies and hardware. If that works, customers can stop shoehorning every problem into a single massive model and instead use the right tool for each job. It’s a practical vision that leans on engineering rather than hype — and that may be precisely what enterprises need as the AI landscape fragments into dozens, hundreds or thousands of specialised models.


The truth behind the CLARITY Act lobby blitz: Crypto to the moon or banks compromise

The digital asset market currently reflects a complex tapestry of legislative hope and aggressive capital rotation. Total market valuation climbed 2.08 per cent in just 24 hours, reaching US$2.74T. This move aligns closely with traditional finance, as evidenced by an 87 per cent 30-day correlation with the S&P 500 index. While many observers look to pure technical indicators, the underlying strength stems from a growing belief that the CLARITY Act will finally establish a federal framework for the industry.

This optimism acts as a tailwind for prices even as a shadow looms in the form of a last-minute offensive from the traditional banking sector. The current rally suggests that participants are beginning to price in the possibility of a regulated future, even as the establishment fights to maintain its grip on dollar deposits and payment flows.

Capital is clearly searching for higher returns beyond the established giants. The Altcoin Season Index jumped 4.26 per cent in 24 hours and 22.5 per cent over the week to reach a level of 49. This indicates a significant shift in trader behaviour, as capital flows into higher-beta assets with specific growth stories. Sui serves as a prime example of this trend: its price surged by over 24 per cent after a Nasdaq-listed firm decided to stake 108.7M tokens, representing 2.7 per cent of the total supply.

This move created an immediate supply shock by removing millions of tokens from the active sell side. Combined with the announcement that African fintech giant Paga would integrate with the Sui network, the asset demonstrated that targeted adoption news now outweighs general market movements. Traders are no longer just buying the broad market. They are hunting for specific catalysts and supply dynamics that can deliver outsized gains.

Bitcoin itself continues to hold the line at US$82,139.04, marking a 1.83 per cent increase that tracks the broader market cap rise of 1.88 per cent. Trading volume for the leading asset spiked by 48.97 per cent. This confirms that the break above the US$82,000 psychological level has weight and attracts both retail and institutional participation. Data from derivatives markets suggests that leverage played a heavy hand in this climb. Open interest for Bitcoin futures surged past the previous all-time high set in 2025.

This influx of leveraged positions triggered a classic short squeeze, with short liquidations totaling US$23.93M in 24 hours. This represents a 16.67 per cent increase over the previous period. When short sellers face forced buybacks, they inadvertently push prices higher, creating a cascade of upward pressure. This feedback loop benefits spot holders but also increases the risk of a sudden reversal if the market becomes overextended on borrowed capital.

Also Read: Agentic economy: The real promise of AI and crypto convergence

Market indicators provide a nuanced view of this momentum. Data highlights that while the 14-day Relative Strength Index sits at 68.43, it has not yet hit the extreme levels that typically signal an immediate crash. Bitcoin dominance holds steady near 60.15 per cent. This suggests that the rally has not yet fully rotated capital into smaller tokens, despite gains in the altcoin sector. Social sentiment remains bullish with a net score of 5.21 out of 10.

Traders consistently highlight profitable trades in the altcoin market. Total open interest across all assets rose 6.07 per cent to reach US$451.72B. This shows that new money is entering the derivatives space to bet on further gains. These bets amplify price moves and ensure that volatility remains a constant companion for those navigating these markets.

The regulatory landscape remains the most potent driver for long-term sentiment and institutional trust. The CLARITY Act represents a rare moment of bipartisan cooperation between Senators Thom Tillis and Angela Alsobrooks. Their hard-won compromise focuses on a critical distinction for stablecoins. It prohibits passive, deposit-style interest but allows rewards tied to actual usage, transactions, or liquidity provision.

This framework would allow the industry to flourish while theoretically protecting consumers from the risks associated with unregulated shadow banking. Prediction markets like Polymarket now place the odds of passage at 75 per cent. Public support appears robust, with a HarrisX poll showing 52 per cent of voters favour the move. This legislation aims to reshore digital asset activity to American venues. Such a move could potentially end the dominance of offshore issuers like Tether and bring innovation back to domestic soil.

Traditional financial organisations are not watching these developments with indifference or passivity. Just four days before the May 14 Senate Banking Committee markup, powerful trade groups, including the American Bankers Association and the Bank Policy Institute, launched a concerted effort to derail the yield compromise. These organisations sent a joint letter urging senators to scrap the rewards carve-out entirely.

While they publicly cite consumer protection concerns, their internal analysis reveals a deeper fear about their own profit margins. These banks warn that yield-bearing stablecoins could drain enough liquidity from the traditional system to reduce consumer, small-business, and farm lending by 20 per cent or more. This battle is essentially a struggle for control over the future of dollar deposits and the rails of the global payments system.

The outcome of this markup will determine whether non-bank issuers retain the room they need to innovate or whether the United States is left with its current fragmented regime.

Also Read: Crypto-gold correlation hits 69%: Where smart money is rotating next

Timing is now the greatest risk for the pro-crypto camp and the broader market structure. If the Senate Banking Committee advances the bill without reopening the fight over yields, a July 4 signing target at the White House remains a realistic possibility. If the banking lobby successfully delays the markup beyond the May 21 Memorial Day recess, the entire effort could reset and lose its momentum.

Policy experts warn that missing this window could delay the development of clear rules until a new Congress takes office in the coming years. This uncertainty explains why social sentiment remains cautiously bullish at 5.21 out of 10. Traders are celebrating recent gains but remain wary of the political hurdles that lie ahead. The market is at an inflection point, where the durability of the current rotation hinges on whether leadership can maintain momentum amid institutional pushback from legacy finance.

Investors should recognise that this rally is not just a random price fluctuation. It is a reaction to a specific legislative shift that threatens the traditional banking monopoly. The push by banks to strip stablecoin rewards from the CLARITY Act proves that they see digital assets as a legitimate threat to their lending models and deposit bases. If the act passes in its current form, it will validate the point of view that clear rules and usage-based rewards are the true catalysts for the next phase of growth.

For now, the market is betting that the senators will hold their ground against the banking lobby. If they succeed, the shift of capital from Bitcoin into select altcoins with strong narratives will likely continue. If they fail, the industry may have to wait much longer for the clarity it needs to fully integrate with the global financial system and move away from its offshore roots.

The clash between the crypto market and the banking sector is reaching a boiling point. This is healthy for the end user, as it drives innovation and offers more choices about where and how to hold value. The coming weeks will reveal whether the legislative process can withstand the pressure from established interests or yield to the status quo. If the current momentum holds, we are witnessing the birth of a new era in digital finance.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.


Amplicity raises US$1M to turn idle backup batteries into profit engines

L-R: Amplicity co-founders Gabriel Schiano (CTO) and Stéphane Leyo (CEO)

Singapore-based Amplicity has secured US$1 million in a seed investment round from investors, including ENGIE, to commercialise a simple but increasingly compelling idea: the batteries sitting inside data centres and industrial facilities should not be treated as expensive ornaments waiting for a blackout.

The startup builds a control layer that allows sites to use existing or planned battery systems, including UPS infrastructure, to cut electricity costs and earn revenue from energy markets without undermining backup readiness. The timing is opportune, as operators across Asia Pacific are being squeezed from several directions at once: power prices remain volatile, grids are under pressure, and large energy users face sharper scrutiny over Scope 2 emissions.

Also Read: The surprising economics of orbital data centres — and the real solution

According to Amplicity co-founders, most backup batteries sit idle for more than 99 per cent of the time. In a region racing to build more data centres and industrial capacity, that is a lot of underused capital.

“For years, backup energy systems like UPS have been treated as passive insurance: essential but unproductive,” CEO Stéphane Leyo said.

That framing is neat, but the bigger story is not about idle hardware. It is about whether Asia’s next wave of energy infrastructure will be built from scratch or sourced from existing assets.

A regional problem hiding in plain sight

Amplicity is targeting a pain point that is especially visible in Southeast Asia. The region’s electricity demand is still climbing, while its digital infrastructure footprint is expanding fast. Singapore remains one of Asia’s most important data centre hubs even under tighter efficiency rules, while nearby Johor and Batam are benefiting from spillover demand. Indonesia is building out its own data centre and industrial estate capacity.

Australia, meanwhile, has become one of the world’s most active markets for battery economics, thanks to its volatile wholesale power market and mature ancillary services opportunities.

In all these markets, resilience matters. Data centres, semiconductor plants, logistics facilities and large industrial sites cannot afford downtime. That means backup batteries are already widespread. The problem is that they are usually sized for emergencies, then left untouched except for periodic testing.

From an engineering perspective, that has long made sense. From an economic perspective, however, it increasingly looks wasteful.

That is the opening Amplicity wants to exploit. Its software sits on top of those battery assets and aims to do two things at once (a simplified dispatch sketch follows the list):

  1. Shave costly on-site demand peaks.
  2. Where market rules allow, dispatch battery capacity into energy or grid-service markets to generate recurring income.
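As a back-of-envelope sketch of the peak-shaving half of that equation, the function below dispatches a backup battery against a demand threshold while always preserving an energy reserve for outages. The threshold, reserve and interval figures are illustrative assumptions, not Amplicity's control logic:

```python
def dispatch_interval(site_load_kw: float,
                      peak_threshold_kw: float,
                      soc_kwh: float,
                      backup_reserve_kwh: float,
                      max_discharge_kw: float,
                      interval_h: float = 0.25) -> float:
    """Battery discharge (kW) for one interval: shave load above the peak
    threshold without ever dipping into the energy reserved for backup."""
    headroom_kwh = soc_kwh - backup_reserve_kwh
    if headroom_kwh <= 0:
        return 0.0                                   # backup reserve untouched
    excess_kw = max(0.0, site_load_kw - peak_threshold_kw)
    return min(excess_kw, max_discharge_kw, headroom_kwh / interval_h)


# Example: a 620 kW spike against a 500 kW target, with 80 kWh of usable headroom
print(dispatch_interval(site_load_kw=620, peak_threshold_kw=500,
                        soc_kwh=200, backup_reserve_kwh=120,
                        max_discharge_kw=250))       # -> 120.0 kW of shaving
```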

The addressable opportunity is not small. Asia Pacific is one of the fastest-growing regions for both stationary storage and data centre construction. The data centre UPS market alone is already worth billions of US dollars globally, with Asia accounting for a meaningful share. Add commercial and industrial battery systems, and the battery hardware footprint that could theoretically be optimised runs to many billions of dollars. The software, services, and revenue-sharing layer built on top of that is easily a large regional opportunity in its own right.

Also Read: The AI server boom in Southeast Asia: Why data centres are running out of power

In Southeast Asia specifically, the total addressable market (TAM) is less about selling more batteries than about monetising batteries that are already being installed for resilience or compliance reasons. That makes the sales motion more attractive in a capital-constrained environment.

Why Singapore and Australia matter

Amplicity’s initial focus on Singapore and Australia is not accidental.

Its home market, Singapore, offers a dense concentration of exactly the kind of customer the company wants: energy-intensive, uptime-obsessed operators under pressure to improve efficiency and decarbonise. Data centres in the city-state face land constraints, regulatory scrutiny and high expectations around energy performance. If Amplicity can prove that UPS systems can be run as economic assets without compromising mission-critical operations, Singapore becomes a strong reference market.

Australia is different, but arguably even more lucrative in the short term. Its electricity market is far more dynamic, with greater price swings and a deeper set of opportunities for batteries to earn money through arbitrage and grid services. A battery that is economically attractive in Singapore can become materially more valuable in Australia if it is exposed to the right market signals. For a startup trying to show hard returns, this is crucial.

Together, the two markets provide a useful test bed: Singapore for operational credibility with demanding customers, Australia for energy-market monetisation.

ENGIE’s upside goes beyond venture optics

ENGIE’s continued presence on Amplicity’s cap table is also strategically important. For the French energy giant, backing a company like Amplicity is a way to deepen its position in distributed energy, behind-the-meter optimisation and customer-facing decarbonisation services.

ENGIE already operates across energy supply, services and infrastructure. A company like Amplicity gives it another lever: the ability to unlock flexibility from customer-owned battery fleets without having to fund or own all the underlying hardware. If those batteries can be orchestrated safely at scale, ENGIE benefits from a stronger customer proposition, new service revenues and potentially more flexibility to support energy trading or retail operations where regulations permit.

In plain English, Amplicity gives ENGIE a software-led route to value that would otherwise remain trapped in backup systems.

Not a white space market

Amplicity is not entering an empty field. Globally, energy storage optimisation and distributed energy management are already crowded categories. Fluence, Stem, Wärtsilä, Schneider Electric, Eaton, ABB and Vertiv all operate somewhere along the spectrum of battery control, microgrid management, site energy optimisation or resilience infrastructure. Some of them are enormous. Schneider Electric, ABB and Eaton are industrial heavyweights with global reach, while Fluence has built a large listed energy storage platform. Stem became one of the better-known software-led storage players in the United States, even if that segment has had a bruising few years.

In Asia and Australia, the picture is similarly active. Utilities, aggregators, and energy service providers already monetise batteries through virtual power plants, demand response programmes, and ancillary services markets. What makes Amplicity slightly different is the narrowness of its wedge. Rather than leading with new battery deployments, it focuses on extracting value from backup and UPS assets customers already have or were going to buy anyway.

That distinction matters because mission-critical operators are often willing to consider software layers and performance-based commercial models long before they are willing to rip out their energy architecture.

The decarbonisation case is real, but not automatic

Amplicity also pitches a climate angle, and this deserves a more sober reading than startup boilerplate usually gets.

Batteries do not reduce emissions by default. If they charge from a fossil-heavy grid at the wrong time and discharge later without displacing dirtier generation, the decarbonisation benefit can be limited. The value comes from how they are controlled.

Amplicity’s case is that smarter battery dispatch can reduce peak demand, shift consumption away from more carbon-intensive periods, help integrate more renewable power and reduce the need for peaking generation. For companies measured on Scope 2 emissions, that can translate into verifiable improvements, especially if battery operation is tied to auditable reporting. In data centres and industrial sites, where electricity demand is both large and visible, even modest efficiency and load-shifting gains can matter.

Also Read: The AI-energy paradox: Will AI spark a green energy revolution or deepen the global energy crisis?

That is why this is more than a niche optimisation play. It sits at the intersection of energy cost management, grid flexibility and corporate decarbonisation.

US$1 million is a modest round by clean-tech standards, and Amplicity still has to prove that site operators will trust a young company with assets designed for worst-case scenarios. But the thesis is hard to dismiss. Asia is adding more batteries, not fewer. The grid is becoming more complex, not less. And businesses are less willing than ever to leave expensive infrastructure idle just because that used to be standard practice.

For Amplicity, the bet is that the next big energy asset in the region is not a shiny new battery farm. It is the one already sitting in the basement, waiting for somebody to give it a job.


Singapore’s AI tools are ready. Its workforce isn’t

Singapore’s businesses have largely figured out how to buy AI tools. What they have not figured out is what to do with the humans sitting next to them.

That is the central finding of a sweeping new report by Accenture, released Monday, which warns that Singapore’s AI ambitions risk stalling not because of a tech gap, but a people one. Titled Singapore’s Growth Mandate: Why the AI future will be won or lost on people, not technology, the report draws on four research streams conducted between December 2025 and February 2026, and paints a picture of a nation caught between digital momentum and human inertia.

The headline numbers are, on the surface, encouraging. Nine in 10 Singapore enterprises have moved beyond merely exploring AI tools into active implementation. Nearly half have deployed generative AI within specific business units, and nearly three quarters are experimenting with or exploring agentic AI.

But peel back the tech layer, and the numbers grow uncomfortable. Only one in three organisations has a talent strategy that is fully aligned with its AI strategy. Nearly half of tech leaders surveyed admitted their companies had yet to redesign job roles or responsibilities at all.

Also Read: Amplicity raises US$1M to turn idle backup batteries into profit engines

The cost of this misalignment is quantifiable. Organisations that placed people at the centre of their AI transformation in 2025 grew revenue 1.8 percentage points higher and profits 1.4 percentage points higher than peers that did not. In a market as competitive as Singapore’s, that is not a rounding error.

Young workers, old assumptions

Perhaps the sharpest finding concerns the country’s entry-level workforce, a cohort that is ambitious, digitally native and, according to the research, being quietly set up to fail.

Entry-level job postings rebounded eight per cent in 2025, suggesting the labour market is upgrading rather than collapsing. But what those roles demand has shifted dramatically. Postings for entry-level ICT positions fell 38 per cent between 2022 and 2025, while demand for AI, machine learning, and data management skills accelerated sharply. Routine, repeatable tasks are being compressed. Roles that combine domain knowledge, analytical reasoning, and the ability to deploy AI tools are expanding.

Young Singaporeans sense the shift. Fully 95 per cent believe Singapore’s ambition to lead in AI is achievable. Yet just 31 per cent strongly agree that the ambition is genuinely people-centric. Their anxiety is specific: 81 per cent report beginner-level or zero understanding of prompt engineering — the skill most commonly cited as a gap — and 80 per cent report the same when it comes to AI ethics and governance. Nearly half worry about keeping pace with the speed of AI change.

The report reserves its most striking finding for last. Only 23 per cent of Singaporean employees genuinely trust their employer to act in their best interest when introducing AI tools, a figure that sits in stark contrast to a global benchmark of 83 per cent from Accenture’s separate Pulse of Change research.

Also Read: Featherless.ai wants to make AI model switching as easy as streaming Netflix

That gap is not merely a morale problem. Trust, the report argues, is a hard operational requirement. When employees do not believe their organisations will invest in them through an AI transition, they disengage from upskilling. And 47 per cent of respondents already identify a lack of leadership support as the single biggest barrier to building AI fluency effectively.

Accenture frames the challenge as a leadership imperative, not a human resources task. Mark Tham, Accenture’s Country Managing Director for Singapore, said business leaders must elevate talent strategies to an equal — or greater — priority than technology adoption, noting the country has largely mastered deploying AI tools but has yet to grapple seriously with redesigning the work around them.

Prime Minister Lawrence Wong’s Budget 2026 pledge of “no jobless growth” in the AI era gives the report a pointed political context. Accenture’s conclusion is blunt: Singapore’s CEOs are, in effect, the implementation layer of that national mandate. The algorithms are ready. The question is whether the organisations built around them are.

Image Credit: Annie Spratt on Unsplash


Your AI strategy isn’t broken, your leadership structure is

I’ve sat in enough AI strategy meetings to know what the real problem is. It’s not the technology.

Your AI strategy isn’t broken. Your leadership structure is. Ninety-five per cent of AI pilots fail to deliver ROI. The problem isn’t the model — it’s who owns the decision, and whether they can explain it.

The models work. They genuinely do. I’ve watched teams demo AI systems that are genuinely impressive — fast, accurate, commercially relevant. And then I’ve watched those same teams, six months later, quietly shelve the project because no one could answer a simple board question: who’s accountable when this goes wrong?

That question kills more AI initiatives than bad data ever will.

The 2026 AI & Data Leadership Benchmark puts a number on it: 95 per cent of AI pilots fail to produce measurable business value. Read that again. Not 30 per cent. Not 50 per cent. Ninety-five. And in almost every postmortem I’ve seen, the failure wasn’t technical. It was structural — absent accountability, unexplainable decisions, and governance that arrived about eighteen months too late.

Here are the five places where that structure breaks — and what it actually takes to fix it.

You can’t explain the decision. That’s your problem now.

There’s a version of this that sounds abstract until it happens to you. Your AI system made a decision — a pricing call, a credit rejection, a hiring shortlist — and now someone is asking you to explain it. Not the engineering team. You.

The National CIO Review found that 90 per cent of CIOs say their professional reputation now directly depends on AI outcomes. And 85 per cent say missing traceability has already killed or stalled projects they were responsible for.

This is the trap nobody talks about clearly enough: when you adopt AI for its speed, you inherit accountability for outcomes you may not be able to reconstruct. The model moved fast. The audit trail didn’t.

What actually helps:

Build the decision trail before the first production deployment — not after the first crisis. Name a human owner for every AI system that touches pricing, hiring, credit, or customer outcomes. Make sure that the person can explain the decision logic in plain language to a board that didn’t ask for a technical briefing.

Your autonomous agent is making decisions. Does anyone know in what order?

Agentic AI is the shift that snuck up on a lot of leaders. We went from “AI that recommends” to “AI that acts” faster than most governance frameworks could follow. These systems now schedule meetings, draft contracts, initiate vendor communications, and escalate purchase orders — all without a human in the loop.

Forrester estimates 60 per cent of Fortune 100 firms will appoint a dedicated Head of AI Governance by the end of 2026, specifically because of agentic risk. That’s not a trend. That’s a fire alarm.

Also Read: Burning billions: AI’s capital frenzy and its global implications

The question I ask every leadership team deploying agentic AI is simple: What’s the trigger? At what point does the agent pause and ask a human? Most teams don’t have a clean answer. That gap is where the expensive mistakes happen.

What actually helps:

Before you go live, document the categories of action that require human confirmation. Build an immutable log of what the agent does. Test the rollback. The escalation protocol isn’t a nice-to-have — it’s the only thing standing between you and a consequential autonomous decision you can’t reverse.
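One way to picture those three requirements, offered as a hedged sketch rather than a prescribed implementation: an explicit policy of action categories, a pause for anything that needs a human, and a hash-chained log so the record cannot be quietly rewritten afterwards. The category names and in-memory storage are illustrative only:

```python
import hashlib
import json
import time

# Illustrative policy: what the agent may do on its own vs. what must pause.
AUTONOMOUS = {"schedule_meeting", "draft_document"}
NEEDS_HUMAN = {"send_contract", "initiate_payment", "escalate_purchase_order"}

_log: list[dict] = []        # stand-in for an append-only, tamper-evident store
_prev_hash = "genesis"


def record(event: dict) -> None:
    """Hash-chain each entry so earlier entries cannot be silently altered."""
    global _prev_hash
    entry = {"ts": time.time(), "prev": _prev_hash, **event}
    _prev_hash = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    entry["hash"] = _prev_hash
    _log.append(entry)


def agent_act(category: str, payload: dict) -> str:
    """Gate every agent action through the policy before anything executes."""
    if category in NEEDS_HUMAN:
        record({"category": category, "payload": payload, "status": "paused_for_human"})
        return "awaiting_confirmation"
    if category not in AUTONOMOUS:
        record({"category": category, "payload": payload, "status": "blocked"})
        return "blocked"
    record({"category": category, "payload": payload, "status": "executed"})
    return "executed"


print(agent_act("send_contract", {"vendor": "Acme", "value_usd": 48000}))
```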

When everyone owns AI, a board can smell it immediately.

I’ve heard this framing in more organisations than I can count: “AI is a shared responsibility across the business.” It sounds collaborative. It is, in practice, a way of ensuring no one is actually responsible.

Only 38 per cent of companies have a unified AI leadership role. Of the organisations consistently reporting strong AI ROI, nearly all have one thing in common: a single person accountable for AI outcomes — not AI tools, not AI infrastructure, AI outcomes — with a direct line to the CEO.

JPMorgan put its AI executive on a 14-person operating committee reporting directly to Jamie Dimon. That’s not a benchmark to aspire to. That’s the baseline for what serious AI governance looks like in 2026.

What actually helps:

Stop distributing accountability as if it were a feature. Appoint one person. Give them the CEO reporting line. The internal fight over whether this role belongs in technology or in business is worth having — because whoever wins that fight is setting the strategic agenda for the next decade.

AI is answering faster than you’re thinking. That’s a design flaw, not a feature.

UNSW Business School research tracks over 150 cognitive biases that affect human decision-making. The one that concerns me most in an AI context is anchoring — the way we over-weight the first recommendation we encounter. When AI surfaces a recommendation in milliseconds, it anchors your thinking before your own independent analysis has even started.

McKinsey puts it precisely: AI is most valuable when it augments human judgment. But its speed creates structural conditions that suppress the formation of independent judgment in the first place. I’ve watched executives nodding along to AI recommendations they hadn’t actually interrogated — not because they were lazy, but because the workflow was designed to move fast and the AI output was already on the screen.

What actually helps:

Redesign the sequence. Human analysis first. AI recommendation second. For any decision that’s high-stakes and hard to reverse, make the independent view a governance requirement — not a personal discipline that gets skipped when the calendar fills up.
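For teams that want that ordering enforced by the workflow rather than by personal willpower, a minimal sketch (with hypothetical names) might hold the AI recommendation back until an independent human estimate has been recorded:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class DecisionRecord:
    """Enforces the sequence: independent human view first, AI view second."""
    question: str
    human_estimate: Optional[str] = None
    _ai_recommendation: str = field(default="", repr=False)

    def set_ai_recommendation(self, recommendation: str) -> None:
        self._ai_recommendation = recommendation    # stored, but not yet shown

    def reveal_ai_recommendation(self) -> str:
        if not self.human_estimate:
            raise PermissionError(
                "Record an independent human estimate before viewing the AI output."
            )
        return self._ai_recommendation


# Usage: the workflow, not discipline, guarantees the human view comes first.
decision = DecisionRecord(question="Approve the revised enterprise pricing tiers?")
decision.set_ai_recommendation("Raise the enterprise tier by 12 per cent")
decision.human_estimate = "Hold pricing; churn risk in the APAC segment"
print(decision.reveal_ai_recommendation())
```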

Also Read: GenAI adoption is rising in Asia, but ROI remains elusive: Adobe

Governance built early is a moat. Governance built after a failure is just damage control.

The framing I hear most often is wrong: that governance slows AI down, that it creates friction, that it’s something you layer on once you’ve proven the use case. Every organisation I’ve seen operate from that assumption has eventually paid the price — in stalled deployments, board confidence erosion, or a regulatory intervention that arrived without warning.

The organisations generating genuine AI ROI in 2026 built governance early — before regulation demanded it, before a failure event required it. They report faster deployment cycles now, not slower. Because the board trust was already there. The audit trail was already there. When regulatory scrutiny arrived, it was a conversation, not a crisis.

What actually helps:

Take the governance investment to your board as a compounding asset with a payback period — not a compliance cost. The EU AI Act and its equivalents are converting what was a differentiator into a compliance floor. Build now, and you’re ahead. Build when you’re forced to, and you’re just keeping up.

Also Read: Singapore’s AI adoption surges, but data complexity raises security risks: Report

The position worth taking

If I had to distil everything above into one claim worth staking a career on, it’s this:

AI is a governance problem disguised as a technology problem. And the leaders who solve governance first are the ones who will still be standing when the dust settles.

That’s not a framework from a consulting deck. It’s what the evidence actually shows — across MIT Sloan, McKinsey, the National CIO Review, Harvard, and IBM — when you read it without the vendor framing.

The five intersections above aren’t abstract. They’re the decisions sitting on your desk right now. How you treat them — as technology problems or as leadership problems — will define what your AI story looks like in two years’ time.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.
