
Featherless.ai wants to make AI model switching as easy as streaming Netflix

Featherless.ai founder and CEO Eugene Cheah

Featherless.ai, a US-headquartered startup founded and led by Singapore-born CEO Eugene Cheah, has a blunt mission: make the messy, fast-changing world of open-source AI easy to run in production.

The company recently raised US$20 million in Series A funding co-led by AMD Ventures and Airbus Ventures, and plans to use the capital to scale global infrastructure, launch a marketplace for specialised open models and deepen hardware integrations to cut inference costs.

In plain English, Featherless helps companies run lots of open-source AI models quickly, cheaply and safely, without forcing them to rely on one giant model or on a single cloud vendor.

Also Read: Featherless.ai secures US$5M to make AI inference faster and cheaper

What sets it apart is an operational promise that sounds almost magical: hot-swapping models in under five seconds, against the roughly 30 minutes a conventional GPU deployment can take. It’s a capability that, if it works at scale, could change how organisations deploy models: from one-size-fits-all behemoths to specialised fleets tailored to discrete tasks.

How hot-swapping actually works

Cheah explains the technical rethink that enables rapid model swaps. “Most inference providers treat each model like a standalone deployment. Load the full weights, warm up the runtime, and serve. Each requires hours of setup. That works fine if you’re running one model. We run over 30,000. And we plan to scale to millions; you can’t have millions of GPUs on standby for every model,” Cheah says.

Featherless’s approach is a systems-level redesign. Models live in hot, warm or cold states across a multi-tier cache and memory-management layer covering the GPU fleet. When a request targets a model that isn’t resident, the platform “hydrates” it from a pre-optimised checkpoint rather than raw weights, an optimisation that dramatically reduces load time.

Three engineering pillars make this possible: normalising and quantising weights at ingest time, proprietary storage and memory-loading techniques for GPUs, and a demand-prediction scheduler that pre-stages models before requests arrive.
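
To make those mechanics concrete, here is a minimal Python sketch of the general pattern: a hot/warm/cold cache with a pre-staging hook for the scheduler and a hydration step on misses. Every name and policy below is illustrative; this is a sketch of the idea, not Featherless’s proprietary system.

```python
from collections import OrderedDict

class TieredModelCache:
    """Toy hot/warm/cold cache. 'Hot' stands in for GPU-resident weights,
    'warm' for checkpoints staged in host memory, 'cold' for object storage."""

    def __init__(self, hot_slots=4, warm_slots=16):
        self.hot = OrderedDict()    # model_id -> "loaded" model, in LRU order
        self.warm = OrderedDict()   # model_id -> pre-optimised checkpoint
        self.hot_slots = hot_slots
        self.warm_slots = warm_slots

    def _fetch_cold(self, model_id):
        # Stand-in for pulling a pre-optimised checkpoint from storage.
        return f"checkpoint:{model_id}"

    def _hydrate(self, checkpoint):
        # Stand-in for the fast load path the article describes: restoring
        # from a pre-optimised checkpoint rather than raw weights.
        return checkpoint.replace("checkpoint", "model")

    def prestage(self, model_id):
        """What a demand-prediction scheduler would call ahead of traffic."""
        if model_id in self.hot or model_id in self.warm:
            return
        if len(self.warm) >= self.warm_slots:
            self.warm.popitem(last=False)            # evict least recent
        self.warm[model_id] = self._fetch_cold(model_id)

    def get(self, model_id):
        """Serve a model, hydrating it up the tiers on a miss."""
        if model_id in self.hot:
            self.hot.move_to_end(model_id)           # refresh LRU position
            return self.hot[model_id]
        checkpoint = self.warm.pop(model_id, None) or self._fetch_cold(model_id)
        if len(self.hot) >= self.hot_slots:
            self.hot.popitem(last=False)             # swap out the coldest model
        self.hot[model_id] = self._hydrate(checkpoint)
        return self.hot[model_id]

cache = TieredModelCache()
cache.prestage("german-support-8b")    # scheduler acts before requests arrive
print(cache.get("german-support-8b"))  # warm hit: hydrate, no cold fetch
```

The point of the structure is visible even in toy form: a request for a non-resident model triggers a cheap hydration from a staged checkpoint rather than a full cold load, and the scheduler’s job is to make sure the warm tier already holds what traffic is about to ask for.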

There are trade-offs. “The first inference on a freshly swapped model carries slightly higher latency, a few hundred milliseconds more than a model that’s been sitting warm for hours. In practice, users don’t notice. The real trade-off was engineering effort,” Cheah says. The payoff is higher utilisation and lower cost, especially in environments that require many specialised models rather than a single monolithic system.

Model pluralism in practice

Featherless pitches itself as an antidote to the “one-model-to-rule-them-all” mindset. The platform lets enterprises define intents, such as code generation, German customer support or compliance summarisation, and Featherless routes those intents to the best-fit model, with fallbacks and failover chains.

“Model pluralism should not mean operational pluralism,” Cheah says. “The whole point of 30,000 models is that you always get the right one. But the system delivering it should feel like one thing, not 30,000 things.”

Practically, customers run a thin orchestration layer that maps business tasks to Featherless endpoints; the platform handles selection, versioning and serving. Monitoring is unified around tasks rather than individual models, making A/B testing and swaps painless.
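
A toy version of that routing layer, with an invented intent taxonomy, invented model names and a stubbed inference call, might look like this:

```python
# Hypothetical intent-to-model routing table with ordered failover chains.
# Model names are invented; call_model() stands in for a real inference call.
ROUTES = {
    "code_generation":          ["coder-32b", "coder-7b", "general-70b"],
    "german_support":           ["german-chat-8b", "multilingual-12b"],
    "compliance_summarisation": ["longctx-34b", "general-70b"],
}

def call_model(model_id: str, prompt: str) -> str:
    # Stand-in for an inference request; raising simulates an unhealthy model.
    return f"[{model_id}] {prompt[:40]}"

def route(intent: str, prompt: str) -> str:
    """Try each model in the intent's failover chain until one succeeds."""
    last_error = None
    for model_id in ROUTES[intent]:
        try:
            return call_model(model_id, prompt)
        except Exception as exc:
            last_error = exc            # unhealthy model: fall through to next
    raise RuntimeError(f"all models failed for intent {intent!r}") from last_error

print(route("code_generation", "Write a binary search function"))
```

The business task, not the model, is the unit the customer reasons about; swapping "coder-32b" for a successor is a one-line table change invisible to callers, which is what makes the A/B testing described above painless.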

Quality, safety and languages

Offering a vast catalogue of open models creates obvious questions about safety, bias and multilingual performance. Featherless applies a layered curation approach: automated screening for licences and architecture checks, inference health tests, and surfaced metadata to help teams make informed choices. Enterprise customers can add stricter tiers: bias benchmarking, multilingual audits and consistency testing.

Also Read: Will the rise of AI mean the ‘termination’ of humankind?

“We don’t claim perfect parity; that would be dishonest given the state of the field,” Cheah says, acknowledging the uneven quality of models across languages. The firm’s history with RWKV (a model architecture designed for multilingual efficiency) informs both research and serving decisions. Featherless stresses transparency: training data provenance, benchmark results and limitations are made available so customers can match models to their needs.

Low-resource and morphologically complex languages pose extra challenges. There’s less high-quality training data, tokenisation can be inefficient and standard transformer architectures hit scaling limits for long contexts. Featherless evaluates models across language families with standardised benchmarks and works with customers to build task-specific evaluation datasets. The company is careful not to promise parity when the underlying data and modelling aren’t yet in place.

Sovereignty, hardware and regional strategy

Featherless frames “AI sovereignty” as a three-layer problem: data residency, model provenance and hardware dependency. On the first layer, the solution is straightforward: deploy where data must stay. On the second, open models make provenance auditable and replaceable. The third layer, hardware, is the trickiest: much of production AI today runs on a proprietary stack dominated by a single vendor.

“That’s why our AMD partnership and ROCm investment isn’t just commercial; it’s strategic,” Cheah says. Featherless aims to prove the stack can run on open hardware with open software, reducing vendor lock-in at the compute layer.

The company is bullish on Southeast Asia’s potential for AI: pragmatic regulation, mobile-first engineers accustomed to multilingual products, and geographic proximity to major compute hubs. The weak points are familiar: insufficient regional GPU capacity and shallower venture capital. Cheah calls for public-private investment in compute and model development tailored to local needs.

Governance, audit trails and compliance

Featherless recognises enterprise concerns about reproducibility and auditability. Bitwise reproducibility across GPU runs is difficult due to non-deterministic floating-point behaviour, so Featherless prioritises practical reproducibility. “Pinned model versions, fixed quantisation configs, seeded sampling parameters. Same model version + same config + same seed = same output,” Cheah says. The platform version-tracks every model configuration and logs model IDs, version hashes, configurations and routing metadata for each request. Enterprises can also opt for private deployments so data never leaves their perimeter.
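
In code, that recipe amounts to pinning every variable that affects output and writing the pins into an audit record at request time. A minimal sketch, with a hypothetical model name and a stand-in for the actual inference call:

```python
import hashlib, json, time

def run_request(prompt: str) -> dict:
    """Pin everything that affects output, then log it with the request."""
    config = {
        "model_id": "acme-summariser",   # hypothetical model name
        "model_version": "2026-01-17",   # pinned version, never "latest"
        "quantisation": "int8-static",   # fixed quantisation config
        "temperature": 0.0,
        "seed": 42,                      # seeded sampling parameters
    }
    version_hash = hashlib.sha256(
        json.dumps(config, sort_keys=True).encode()
    ).hexdigest()[:16]
    output = f"summary of: {prompt}"     # stand-in for the inference call
    audit_record = {                     # logged for every request
        "timestamp": time.time(),
        "config": config,
        "version_hash": version_hash,
        "routing": {"region": "eu-west", "tier": "hot"},  # illustrative
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    return {"output": output, "audit": audit_record}

print(run_request("Q3 transaction report")["audit"]["version_hash"])
```

Same config, same seed, same version hash: when a regulator or customer asks why an output looked the way it did, the record answers without anyone having to rerun the model from memory.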

Handling licences and problematic training data is treated as a transparency exercise rather than a legal shield. Models are classified by licence at ingest, customers see licence details up front, and enterprise customers can filter models by licence category. Featherless maintains a watch list for models with provenance concerns and highlights models trained on explicit public-domain or licensed datasets.

When one model fails

Cheah offers a concrete example to illustrate the costs of a single-model approach. A Series B fintech used a single large closed model for everything—chatbots, transaction categorisation, and compliance summarisation. Over time, costs ballooned, latency rose during peak traffic, and GDPR obligations complicated European expansion.

Also Read: AI adoption is an area of maturity for SMEs, but they have advantage over big corporations: Aicadium

After decomposing workloads across Featherless, the company saw roughly a 65 per cent reduction in total inference costs and substantial latency improvements: conversational workloads were moved to a smaller, faster model (latency down 70 per cent, cost down 80 per cent for that workload), compliance tasks ran on a long-context model in the EU, and categorisation moved to a lightweight classifier. Importantly, governance became tractable.

Risks and the road ahead

Cheah is candid about the threats to Featherless’s thesis: hyperscalers undercutting pricing, consolidation of model development, hardware disruptions and an edge shift where devices handle more inference. His response is to double down on neutrality, breadth of catalogue, optimisation depth and vendor-agnostic engineering. “Open models win, inference needs to be efficient, neutrality matters. Those hold regardless of which specific risk plays out,” he says.

Featherless’s bet is operational: make it trivial to run many open models reliably, cheaply and compliantly across geographies and hardware. If that works, customers can stop shoehorning every problem into a single massive model and instead use the right tool for each job. It’s a practical vision that leans on engineering rather than hype — and that may be precisely what enterprises need as the AI landscape fragments into dozens, hundreds or thousands of specialised models.


The truth behind the CLARITY Act lobby blitz: Crypto to the moon or banks compromise

The digital asset market currently reflects a complex tapestry of legislative hope and aggressive capital rotation. Total market valuation climbed 2.08 per cent in just 24 hours, reaching US$2.74T. This move aligns closely with traditional finance, as evidenced by an 87 per cent 30-day correlation with the S&P 500 index. While many observers look to pure technical indicators, the underlying strength stems from a growing belief that the CLARITY Act will finally establish a federal framework for the industry.

This optimism acts as a tailwind for prices even as a shadow looms in the form of a last-minute offensive from the traditional banking sector. The current rally suggests that participants are beginning to price in the possibility of a regulated future, even as the establishment fights to maintain its grip on dollar deposits and payment flows.

Capital is clearly searching for higher returns beyond the established giants. The Altcoin Season Index jumped 4.26 per cent in 24 hours and 22.5 per cent over the week to reach a level of 49. This indicates a significant shift in trader behaviour, as capital flows into higher-beta assets with specific growth stories. Sui serves as a prime example of this trend: its price surged by over 24 per cent after a Nasdaq-listed firm staked 108.7M tokens, representing 2.7 per cent of the total supply.

This move created an immediate supply shock by removing millions of tokens from the active sell side. Combined with the announcement that African fintech giant Paga would integrate with the Sui network, the asset demonstrated that targeted adoption news now outweighs general market movements. Traders are no longer just buying the broad market. They are hunting for specific catalysts and supply dynamics that can deliver outsized gains.

Bitcoin itself continues to hold the line at US$82,139.04, marking a 1.83 per cent increase that tracks the broader market cap rise of 1.88 per cent. Trading volume for the leading asset spiked by 48.97 per cent. This confirms that the break above the US$82,000 psychological level has weight and attracts both retail and institutional participation. Data from derivatives markets suggests that leverage played a heavy hand in this climb. Open interest for Bitcoin futures surged past the previous all-time high set in 2025.

This influx of leveraged positions triggered a classic short squeeze, with short liquidations totalling US$23.93M in 24 hours, a 16.67 per cent increase over the previous period. When short sellers face forced buybacks, they inadvertently push prices higher, creating a cascade of upward pressure. This feedback loop benefits spot holders but also increases the risk of a sudden reversal if the market becomes overextended on borrowed capital.

Also Read: Agentic economy: The real promise of AI and crypto convergence

Market indicators provide a nuanced view of this momentum. Data highlights that while the 14-day Relative Strength Index sits at 68.43, it has not yet hit the extreme levels that typically signal an immediate crash. Bitcoin dominance holds steady near 60.15 per cent. This suggests that the rally has not yet fully rotated capital into smaller tokens, despite gains in the altcoin sector. Social sentiment remains bullish with a net score of 5.21 out of 10.

Traders consistently highlight profitable trades in the altcoin market. Total open interest across all assets rose 6.07 per cent to reach US$451.72B. This shows that new money is entering the derivatives space to bet on further gains. These bets amplify price moves and ensure that volatility remains a constant companion for those navigating these markets.

The regulatory landscape remains the most potent driver for long-term sentiment and institutional trust. The CLARITY Act represents a rare moment of bipartisan cooperation between Senators Thom Tillis and Angela Alsobrooks. Their hard-won compromise focuses on a critical distinction for stablecoins. It prohibits passive, deposit-style interest but allows rewards tied to actual usage, transactions, or liquidity provision.

This framework would allow the industry to flourish while theoretically protecting consumers from the risks associated with unregulated shadow banking. Prediction markets like Polymarket now place the odds of passage at 75 per cent. Public support appears robust, with a HarrisX poll showing 52 per cent of voters favour the move. This legislation aims to reshore digital asset activity to American venues. Such a move could potentially end the dominance of offshore issuers like Tether and bring innovation back to domestic soil.

Traditional financial organisations are not watching these developments with indifference or passivity. Just four days before the May 14 Senate Banking Committee markup, powerful trade groups, including the American Bankers Association and the Bank Policy Institute, launched a concerted effort to derail the yield compromise. These organisations sent a joint letter urging senators to scrap the rewards carve-out entirely.

While they publicly cite consumer protection concerns, their internal analysis reveals a deeper fear about their own profit margins. These banks warn that yield-bearing stablecoins could drain enough liquidity from the traditional system to reduce consumer, small-business, and farm lending by 20 per cent or more. This battle is essentially a struggle for control over the future of dollar deposits and the rails of the global payments system.

The outcome of this markup will determine whether non-bank issuers retain the room they need for innovation or whether the United States is left with its current fragmented regime.

Also Read: Crypto-gold correlation hits 69%: Where smart money is rotating next

Timing is now the greatest risk for the pro-crypto camp and the broader market structure. If the Senate Banking Committee advances the bill without reopening the fight over yields, a July 4 signing target at the White House remains a realistic possibility. If the banking lobby successfully delays the markup beyond the May 21 Memorial Day recess, the entire effort could reset and lose its momentum.

Policy experts warn that missing this window could delay the development of clear rules until a new Congress takes office in the coming years. This uncertainty explains why social sentiment remains cautiously bullish at 5.21 out of 10. Traders are celebrating recent gains but remain wary of the political hurdles that lie ahead. The market is at an inflection point, where the durability of the current rotation hinges on whether leadership can maintain momentum amid institutional pushback from legacy finance.

Investors should recognise that this rally is not just a random price fluctuation. It is a reaction to a specific legislative shift that threatens the traditional banking monopoly. The push by banks to strip stablecoin rewards from the CLARITY Act proves that they see digital assets as a legitimate threat to their lending models and deposit bases. If the act passes in its current form, it will validate the point of view that clear rules and usage-based rewards are the true catalysts for the next phase of growth.

For now, the market is betting that the senators will hold their ground against the banking lobby. If they succeed, the shift of capital from Bitcoin into select altcoins with strong narratives will likely continue. If they fail, the industry may have to wait much longer for the clarity it needs to fully integrate with the global financial system and move away from its offshore roots.

The clash between the crypto market and the banking sector is reaching a boiling point. This is healthy for the end user, as it drives innovation and offers more choices about where and how to hold value. The coming weeks will reveal whether the legislative process can withstand the pressure from established interests or yield to the status quo. If the current momentum holds, we are witnessing the birth of a new era in digital finance.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.


Amplicity raises US$1M to turn idle backup batteries into profit engines

L-R: Amplicity co-founders Gabriel Schiano (CTO) and Stéphane Leyo (CEO)

Singapore-based Amplicity has secured US$1 million in a seed investment round from investors, including ENGIE, to commercialise a simple but increasingly compelling idea: the batteries sitting inside data centres and industrial facilities should not be treated as expensive ornaments waiting for a blackout.

The startup builds a control layer that allows sites to use existing or planned battery systems, including UPS infrastructure, to cut electricity costs and earn revenue from energy markets without undermining backup readiness. The timing is apt: operators across Asia Pacific are being squeezed from several directions at once. Power prices remain volatile, grids are under pressure, and large energy users face sharper scrutiny over Scope 2 emissions.

Also Read: The surprising economics of orbital data centres — and the real solution

According to Amplicity co-founders, most backup batteries sit idle for more than 99 per cent of the time. In a region racing to build more data centres and industrial capacity, that is a lot of underused capital.

“For years, backup energy systems like UPS have been treated as passive insurance: essential but unproductive,” CEO Stéphane Leyo said.

That framing is neat, but the bigger story is not about idle hardware. It is about whether Asia’s next wave of energy infrastructure will be built from scratch or sourced from existing assets.

A regional problem hiding in plain sight

Amplicity is targeting a pain point that is especially visible in Southeast Asia. The region’s electricity demand is still climbing, while its digital infrastructure footprint is expanding fast. Singapore remains one of Asia’s most important data centre hubs even under tighter efficiency rules, while nearby Johor and Batam are benefiting from spillover demand. Indonesia is building out its own data centre and industrial estate capacity.

Australia, meanwhile, has become one of the world’s most active markets for battery economics, thanks to its volatile wholesale power market and mature ancillary services opportunities.

In all these markets, resilience matters. Data centres, semiconductor plants, logistics facilities and large industrial sites cannot afford downtime. That means backup batteries are already widespread. The problem is that they are usually sized for emergencies, then left untouched except for periodic testing.

From an engineering perspective, that has long made sense. From an economic perspective, however, it increasingly looks wasteful.

That is the opening Amplicity wants to exploit. Its software sits on top of those battery assets and aims to do two things at once, as the simplified sketch after this list illustrates:

  1. Shave costly on-site demand peaks.
  2. Where market rules allow, dispatch battery capacity into energy or grid-service markets to generate recurring income.
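
A toy dispatch rule shows how the first goal can coexist with backup readiness: discharge only the excess above a peak threshold, and never dip into a ring-fenced reserve. All thresholds and numbers below are invented for illustration; this is not Amplicity’s actual control logic.

```python
# Invented parameters: the reserve floor preserves backup runtime, the
# threshold marks where grid draw starts incurring peak-demand charges.
BACKUP_RESERVE_KWH = 800.0
PEAK_THRESHOLD_KW = 1200.0

def dispatch(site_load_kw, battery_kwh, max_discharge_kw):
    """Return (discharge_kw, grid_draw_kw) for one one-hour interval."""
    headroom_kwh = max(battery_kwh - BACKUP_RESERVE_KWH, 0.0)
    if site_load_kw <= PEAK_THRESHOLD_KW or headroom_kwh == 0.0:
        return 0.0, site_load_kw                   # nothing to shave
    shave = min(site_load_kw - PEAK_THRESHOLD_KW,  # only the excess over peak
                max_discharge_kw,                  # inverter limit
                headroom_kwh)                      # 1h interval, so kWh ~ kW
    return shave, site_load_kw - shave

# A 1,500 kW peak with 1,000 kWh stored: only 200 kWh sits above the reserve
# floor, so the battery shaves 200 kW and the rest stays on the grid rather
# than eating into backup runtime. Prints (200.0, 1300.0).
print(dispatch(site_load_kw=1500.0, battery_kwh=1000.0, max_discharge_kw=400.0))
```

The reserve floor is the whole argument: the asset earns money in normal hours, but the worst-case runtime it was bought for is never spent.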

The addressable opportunity is not small. Asia Pacific is one of the fastest-growing regions for both stationary storage and data centre construction. The data centre UPS market alone is already worth billions of US dollars globally, with Asia accounting for a meaningful share. Add commercial and industrial battery systems, and the battery hardware footprint that could theoretically be optimised runs into the many billions. The software, services, and revenue-sharing layer built on top of that is easily a large regional opportunity in its own right.

Also Read: The AI server boom in Southeast Asia: Why data centres are running out of power

In Southeast Asia specifically, the total addressable market (TAM) is less about selling more batteries than about monetising batteries that are already being installed for resilience or compliance reasons. That makes the sales motion more attractive in a capital-constrained environment.

Why Singapore and Australia matter

Amplicity’s initial focus on Singapore and Australia is not accidental.

Its home market, Singapore, offers a dense concentration of exactly the kind of customer the company wants: energy-intensive, uptime-obsessed operators under pressure to improve efficiency and decarbonise. Data centres in the city-state face land constraints, regulatory scrutiny and high expectations around energy performance. If Amplicity can prove that UPS systems can be run as economic assets without compromising mission-critical operations, Singapore becomes a strong reference market.

Australia is different, but arguably even more lucrative in the short term. Its electricity market is far more dynamic, with greater price swings and a deeper set of opportunities for batteries to earn money through arbitrage and grid services. A battery that is economically attractive in Singapore can become materially more valuable in Australia if it is exposed to the right market signals. For a startup trying to show hard returns, this is crucial.

Together, the two markets provide a useful test bed: Singapore for operational credibility with demanding customers, Australia for energy-market monetisation.

ENGIE’s upside goes beyond venture optics

ENGIE’s continued presence on Amplicity’s cap table is also strategically important. For the French energy giant, backing a company like Amplicity is a way to deepen its position in distributed energy, behind-the-meter optimisation and customer-facing decarbonisation services.

ENGIE already operates across energy supply, services and infrastructure. A company like Amplicity gives it another lever: the ability to unlock flexibility from customer-owned battery fleets without having to fund or own all the underlying hardware. If those batteries can be orchestrated safely at scale, ENGIE benefits from a stronger customer proposition, new service revenues and potentially more flexibility to support energy trading or retail operations where regulations permit.

In plain English, Amplicity gives ENGIE a software-led route to value that would otherwise remain trapped in backup systems.

Not a white space market

Amplicity is not entering an empty field. Globally, energy storage optimisation and distributed energy management are already crowded categories. Fluence, Stem, Wärtsilä, Schneider Electric, Eaton, ABB and Vertiv all operate somewhere along the spectrum of battery control, microgrid management, site energy optimisation or resilience infrastructure. Some of them are enormous. Schneider Electric, ABB and Eaton are industrial heavyweights with global reach, while Fluence has built a large listed energy storage platform. Stem became one of the better-known software-led storage players in the United States, even if that segment has had a bruising few years.

In Asia and Australia, the picture is similarly active. Utilities, aggregators, and energy service providers already monetise batteries through virtual power plants, demand response programmes, and ancillary services markets. What makes Amplicity slightly different is the narrowness of its wedge. Rather than leading with new battery deployments, it focuses on extracting value from backup and UPS assets customers already have or were going to buy anyway.

That distinction matters because mission-critical operators are often willing to consider software layers and performance-based commercial models long before they are willing to rip out their energy architecture.

The decarbonisation case is real, but not automatic

Amplicity also pitches a climate angle, and this deserves a more sober reading than startup boilerplate usually gets.

Batteries do not reduce emissions by default. If they charge from a fossil-heavy grid at the wrong time and discharge later without displacing dirtier generation, the decarbonisation benefit can be limited. The value comes from how they are controlled.

Amplicity’s case is that smarter battery dispatch can reduce peak demand, shift consumption away from more carbon-intensive periods, help integrate more renewable power and reduce the need for peaking generation. For companies measured on Scope 2 emissions, that can translate into verifiable improvements, especially if battery operation is tied to auditable reporting. In data centres and industrial sites, where electricity demand is both large and visible, even modest efficiency and load-shifting gains can matter.

Also Read: The AI-energy paradox: Will AI spark a green energy revolution or deepen the global energy crisis?

That is why this is more than a niche optimisation play. It sits at the intersection of energy cost management, grid flexibility and corporate decarbonisation.

US$1 million is a modest round by clean-tech standards, and Amplicity still has to prove that site operators will trust a young company with assets designed for worst-case scenarios. But the thesis is hard to dismiss. Asia is adding more batteries, not fewer. The grid is becoming more complex, not less. And businesses are less willing than ever to leave expensive infrastructure idle just because that used to be standard practice.

For Amplicity, the bet is that the next big energy asset in the region is not a shiny new battery farm. It is the one already sitting in the basement, waiting for somebody to give it a job.


Singapore’s AI tools are ready. Its workforce isn’t

Singapore’s businesses have largely figured out how to buy AI tools. What they have not figured out is what to do with the humans sitting next to them.

That is the central finding of a sweeping new report by Accenture, released Monday, which warns that Singapore’s AI ambitions risk stalling not because of a tech gap, but a people one. Titled Singapore’s Growth Mandate: Why the AI future will be won or lost on people, not technology, the report draws on four research streams conducted between December 2025 and February 2026, and paints a picture of a nation caught between digital momentum and human inertia.

The headline numbers are, on the surface, encouraging. Nine in 10 Singapore enterprises have moved beyond merely exploring AI tools into active implementation. Nearly half have deployed generative AI within specific business units, and nearly three quarters are experimenting with or exploring agentic AI.

But peel back the tech layer, and the numbers grow uncomfortable. Only one in three organisations has a talent strategy that is fully aligned with its AI strategy. Nearly half of tech leaders surveyed admitted their companies had yet to redesign job roles or responsibilities at all.

Also Read: Amplicity raises US$1M to turn idle backup batteries into profit engines

The cost of this misalignment is quantifiable. Organisations that placed people at the centre of their AI transformation in 2025 grew revenue 1.8 percentage points faster and profits 1.4 percentage points faster than peers that did not. In a market as competitive as Singapore’s, that is not a rounding error.

Young workers, old assumptions

Perhaps the sharpest finding concerns the country’s entry-level workforce, a cohort that is ambitious, digitally native and, according to the research, being quietly set up to fail.

Entry-level job postings rebounded eight per cent in 2025, suggesting the labour market is upgrading rather than collapsing. But what those roles demand has shifted dramatically. Postings for entry-level ICT positions fell 38 per cent between 2022 and 2025, while demand for AI, machine learning, and data management skills accelerated sharply. Routine, repeatable tasks are being compressed. Roles that combine domain knowledge, analytical reasoning, and the ability to deploy AI tools are expanding.

Young Singaporeans sense the shift. Fully 95 per cent believe Singapore’s ambition to lead in AI is achievable. Yet just 31 per cent strongly agree that the ambition is genuinely people-centric. Their anxiety is specific: 81 per cent report beginner-level or zero understanding of prompt engineering — the skill most commonly cited as a gap — and 80 per cent report the same when it comes to AI ethics and governance. Nearly half worry about keeping pace with the speed of AI change.

The report reserves its most striking finding for last. Only 23 per cent of Singaporean employees genuinely trust their employer to act in their best interest when introducing AI tools, a figure that sits in stark contrast to a global benchmark of 83 per cent from Accenture’s separate Pulse of Change research.

Also Read: Featherless.ai wants to make AI model switching as easy as streaming Netflix

That gap is not merely a morale problem. Trust, the report argues, is a hard operational requirement. When employees do not believe their organisations will invest in them through an AI transition, they disengage from upskilling. And 47 per cent of respondents already identify a lack of leadership support as the single biggest barrier to building AI fluency effectively.

Accenture frames the challenge as a leadership imperative, not a human resources task. Mark Tham, Accenture’s Country Managing Director for Singapore, said business leaders must elevate talent strategies to an equal — or greater — priority than technology adoption, noting the country has largely mastered deploying AI tools but has yet to grapple seriously with redesigning the work around them.

Prime Minister Lawrence Wong’s Budget 2026 pledge of “no jobless growth” in the AI era gives the report a pointed political context. Accenture’s conclusion is blunt: Singapore’s CEOs are, in effect, the implementation layer of that national mandate. The algorithms are ready. The question is whether the organisations built around them are.

Image Credit: Annie Spratt on Unsplash


Your AI strategy isn’t broken, your leadership structure is

I’ve sat in enough AI strategy meetings to know what the real problem is. It’s not the technology.

Your AI strategy isn’t broken. Your leadership structure is. Ninety-five per cent of AI pilots fail to deliver ROI. The problem isn’t the model — it’s who owns the decision, and whether they can explain it.

The models work. They genuinely do. I’ve watched teams demo AI systems that are genuinely impressive — fast, accurate, commercially relevant. And then I’ve watched those same teams, six months later, quietly shelve the project because no one could answer a simple board question: who’s accountable when this goes wrong?

That question kills more AI initiatives than bad data ever will.

The 2026 AI & Data Leadership Benchmark puts a number on it: 95 per cent of AI pilots fail to produce measurable business value. Read that again. Not 30 per cent. Not 50 per cent. Ninety-five. And in almost every postmortem I’ve seen, the failure wasn’t technical. It was structural — absent accountability, unexplainable decisions, and governance that arrived about eighteen months too late.

Here are the five places where that structure breaks — and what it actually takes to fix it.

You can’t explain the decision. That’s your problem now.

There’s a version of this that sounds abstract until it happens to you. Your AI system made a decision — a pricing call, a credit rejection, a hiring shortlist — and now someone is asking you to explain it. Not the engineering team. You.

The National CIO Review found that 90 per cent of CIOs say their professional reputation now directly depends on AI outcomes. And 85 per cent say missing traceability has already killed or stalled projects they were responsible for.

This is the trap nobody talks about clearly enough: when you adopt AI for its speed, you inherit accountability for outcomes you may not be able to reconstruct. The model moved fast. The audit trail didn’t.

What actually helps:

Build the decision trail before the first production deployment — not after the first crisis. Name a human owner for every AI system that touches pricing, hiring, credit, or customer outcomes. Make sure that person can explain the decision logic in plain language to a board that didn’t ask for a technical briefing.
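
One way to picture that trail is as a structured record created at decision time rather than reconstructed after the fact. A minimal sketch, with every field value invented for illustration:

```python
from dataclasses import dataclass, field
import datetime

@dataclass(frozen=True)
class DecisionRecord:
    """One reconstructable AI decision: who owns it, and why it was made."""
    system: str          # e.g. a hypothetical "credit-scoring-v3"
    owner: str           # the named human accountable for the outcome
    inputs_ref: str      # pointer to the exact inputs the model saw
    model_version: str   # pinned version, never a moving target
    outcome: str
    rationale: str       # plain-language logic a board can follow
    at: datetime.datetime = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc)
    )

record = DecisionRecord(
    system="credit-scoring-v3",
    owner="head.of.lending@example.com",
    inputs_ref="s3://decisions/2026/04/req-8812.json",
    model_version="2026-03-02",
    outcome="declined",
    rationale="Debt-service ratio above policy ceiling; two recent defaults.",
)
print(record.owner, "-", record.rationale)
```

If the owner and rationale fields cannot be filled in honestly before deployment, that is the warning sign, not a paperwork gap.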

Your autonomous agent is making decisions. Does anyone know in what order?

Agentic AI is the shift that snuck up on a lot of leaders. We went from “AI that recommends” to “AI that acts” faster than most governance frameworks could follow. These systems now schedule meetings, draft contracts, initiate vendor communications, and escalate purchase orders — all without a human in the loop.

Forrester estimates 60 per cent of Fortune 100 firms will appoint a dedicated Head of AI Governance by the end of 2026, specifically because of agentic risk. That’s not a trend. That’s a fire alarm.

Also Read: Burning billions: AI’s capital frenzy and its global implications

The question I ask every leadership team deploying agentic AI is simple: What’s the trigger? At what point does the agent pause and ask a human? Most teams don’t have a clean answer. That gap is where the expensive mistakes happen.

What actually helps:

Before you go live, document the categories of action that require human confirmation. Build an immutable log of what the agent does. Test the rollback. The escalation protocol isn’t a nice-to-have — it’s the only thing standing between you and a consequential autonomous decision you can’t reverse.
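
A minimal sketch of such an escalation gate, with invented action categories and an in-memory stand-in for the immutable log:

```python
# Categories that must pause for human sign-off; invented for illustration.
REQUIRES_HUMAN = {"payment", "contract", "vendor_commitment"}
ACTION_LOG = []   # in production: an append-only, externally stored log

def perform(action, human_approver=None):
    """Run an agent action, escalating where its category demands it."""
    if action["category"] in REQUIRES_HUMAN and human_approver is None:
        ACTION_LOG.append({**action, "status": "held_for_human"})
        return "escalated"                        # the agent stops here
    ACTION_LOG.append({**action, "status": "executed",
                       "approved_by": human_approver})
    return "executed"

print(perform({"category": "calendar", "detail": "schedule weekly sync"}))
print(perform({"category": "payment", "detail": "pay invoice #991"}))
print(ACTION_LOG[-1]["status"])   # "held_for_human": no money moved
```

The hard part is not the code; it is the contents of the confirmation list, which is exactly the document most teams have never written down.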

When everyone owns AI, a board can smell it immediately.

I’ve heard this framing in more organisations than I can count: “AI is a shared responsibility across the business.” It sounds collaborative. It is, in practice, a way of ensuring no one is actually responsible.

Only 38 per cent of companies have a unified AI leadership role. Of the organisations consistently reporting strong AI ROI, nearly all have one thing in common: a single person accountable for AI outcomes — not AI tools, not AI infrastructure, AI outcomes — with a direct line to the CEO.

JPMorgan put its AI executive on a 14-person operating committee reporting directly to Jamie Dimon. That’s not a benchmark to aspire to. That’s the baseline for what serious AI governance looks like in 2026.

What actually helps:

Stop distributing accountability as if it were a feature. Appoint one person. Give them the CEO reporting line. The internal fight over whether this role belongs in technology or in business is worth having — because whoever wins that fight is setting the strategic agenda for the next decade.

AI is answering faster than you’re thinking. That’s a design flaw, not a feature.

UNSW Business School research tracks over 150 cognitive biases that affect human decision-making. The one that concerns me most in an AI context is anchoring — the way we over-weight the first recommendation we encounter. When AI surfaces a recommendation in milliseconds, it anchors your thinking before your own independent analysis has even started.

McKinsey puts it precisely: AI is most valuable when it augments human judgment. But its speed creates structural conditions that suppress the formation of independent judgment in the first place. I’ve watched executives nodding along to AI recommendations they hadn’t actually interrogated — not because they were lazy, but because the workflow was designed to move fast and the AI output was already on the screen.

What actually helps:

Redesign the sequence. Human analysis first. AI recommendation second. For any decision that’s high-stakes and hard to reverse, make the independent view a governance requirement — not a personal discipline that gets skipped when the calendar fills up.

Also Read: GenAI adoption is rising in Asia, but ROI remains elusive: Adobe

Governance built early is a moat. Governance built after a failure is just damage control.

The framing I hear most often is wrong: that governance slows AI down, that it creates friction, that it’s something you layer on once you’ve proven the use case. Every organisation I’ve seen operate from that assumption has eventually paid the price — in stalled deployments, board confidence erosion, or a regulatory intervention that arrived without warning.

The organisations generating genuine AI ROI in 2026 built governance early — before regulation demanded it, before a failure event required it. They report faster deployment cycles now, not slower. Because the board trust was already there. The audit trail was already there. When regulatory scrutiny arrived, it was a conversation, not a crisis.

What actually helps:

Take the governance investment to your board as a compounding asset with a payback period — not a compliance cost. The EU AI Act and its equivalents are converting what was a differentiator into a compliance floor. Build now, and you’re ahead. Build when you’re forced to, and you’re just keeping up.

Also Read: Singapore’s AI adoption surges, but data complexity raises security risks: Report

The position worth taking

If I had to distil everything above into one claim worth staking a career on, it’s this:

AI is a governance problem disguised as a technology problem. And the leaders who solve governance first are the ones who will still be standing when the dust settles.

That’s not a framework from a consulting deck. It’s what the evidence actually shows — across MIT Sloan, McKinsey, the National CIO Review, Harvard, and IBM — when you read it without the vendor framing.

The five intersections above aren’t abstract. They’re the decisions sitting on your desk right now. How you treat them — as technology problems or as leadership problems — will define what your AI story looks like in two years’ time.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.


The hidden cost of AI coding: Why proof will matter more than prompts

AI coding tools turned software output into a speed story. A developer can sketch a product in the morning and push a working build before dinner. That is why vibe coding spread so fast, even as security researchers warned that AI-generated code can widen software supply chain risk.

The part most people missed sits behind the prompt box. In many AI coding stacks, code, prompts, and usage data can pass through outside platforms, cloud infrastructure, or model-provider systems. 

For a startup hacking on a landing page, that may feel tolerable. For a bank, fintech, or fund, it can open a path to IP loss, audit trouble, and valuation damage.

An alarm bell from my own workflow

The concern started from a personal place. I had been using AI coding tools on quant trading systems, then realised the privacy settings behind those tools deserved a much closer look. This is my life’s work. How am I supposed to feel about this?

One example of that concern, reflected in policy, is Cursor’s data-use page. It states that if Privacy Mode is turned off, Cursor may use and store codebase data, prompts, editor actions, code snippets, and other code data to improve features and train models. Requests still pass through its backend even when a user brings their own API key.

The rules also change depending on which product is in the chain. OpenAI states it doesn’t train on business data by default, and Anthropic says the same of its commercial products. Consumer products and third-party access follow separate terms, which leaves enterprises sorting through a patchwork of settings, vendors, and responsibilities.

Also Read: Can you build an app without coding? My experiment might surprise you

Why this hits finance harder

A code leak is not just a developer problem in regulated sectors. A financial codebase can hold client identifiers, internal controls, pricing logic, fraud rules, risk models, and trading strategies. Put differently, source code carries business logic, internal workflows, architecture decisions, and years of engineering work. Once it leaves a company’s control, the damage can spill into customer trust, due diligence, compliance, and enterprise value.

Ninety per cent of security professionals say insider attacks are as hard as, or harder than, external ones to detect, and 72 per cent of organisations still cannot see how users interact with sensitive data across endpoints, cloud apps, and GenAI platforms.

And that pressure is meeting a tougher legal climate. 2025 marked the move from AI hype to AI accountability, with regulators in the US and EU shifting toward enforcement and compliance deadlines. In Europe, the Digital Operational Resilience Act makes clear that financial entities remain fully responsible for their obligations, including when ICT services are outsourced.

Visibility is also getting worse as AI systems touch more of the workflow. Only 21 per cent of organisations maintain a fully up-to-date inventory of agents, tools, and connections, leaving 79 per cent operating with blind spots. Nearly 40 per cent of enterprise AI interactions now involve sensitive data, including copied text, pasted content, and file uploads.

What’s the pitch to non-technical executives?

Let’s frame the risk in business terms. Using AI means sending data to whoever provides the model or the platform, and potentially also to whoever provides the infrastructure.

The big question for executives, in my view, is whether they are comfortable with that chain seeing, storing, or learning from their most valuable data.

Also Read: From chatbots to vibe-coding: 3 AI experiments that changed my investment strategy

That answer is shifting quickly. Financial and regulated firms can’t afford the ‘move fast and break things’ approach that many AI tools implicitly encourage. More often now, regulators, buyers, and internal security teams want a clear record of where data went, who touched it, and what evidence exists afterwards.

The next premium in AI: Controlled execution?

The market has already rewarded speed. The next premium may go to platforms that keep the speed while standing up to security review and giving compliance teams evidence they can stand behind. That is a finance story as much as a tech one, because budgets, contracts, due diligence, and enterprise value tend to follow tools that reduce uncertainty instead of adding another black box.

AI can clearly write code. But where does that code travel? Who can inspect the path? What proof is left behind when the work is done? Those are the sharper questions for boards, CFOs, CISOs, and investors.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.


Cybersecurity strategies for startups on a budget

Digital evolution worldwide has been rapid over the past few decades. Startups are increasingly transitioning from local to regional and even global market presence, underscoring the opportunities that digitisation at scale has brought. This development has made cybersecurity a key pillar of effective business governance in the modern age. 

Today, having a robust cybersecurity ecosystem ensures that startups preserve stakeholder trust. Thankfully, small businesses no longer require a large capital investment to build a defensible and modern security posture. With a strategic approach, high impact and low cost can exist simultaneously.

Shifting from a reactive mindset to a proactive one

The cybersecurity industry has largely shifted from focusing solely on prevention toward building resilient and proactive models. Emphasis on detection and recovery has become an important measure of a business’s longevity. Adopting this philosophy is critical for emerging businesses, which are often viewed as particularly vulnerable. 

For startups, effectively protecting sensitive customer information can be the difference between long-term growth and reputational damage. Entrepreneurs who focus on security early on find it easier to navigate regulatory requirements and secure partnerships with large organisations. Fortunately, building resilient systems is more about continuous education and operational improvement than it is about heavy capital expenditure.

Strengthening identity and access control

Identity is the primary focus area for building modern cybersecurity systems. With more businesses migrating to cloud-based ecosystems, user account management is now the most important line of defence. Implementing Multi-Factor Authentication (MFA) is the most effective and low-cost approach available. By adopting a second form of verification, organisations can prevent approximately 99 per cent of account takeover attacks.
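
For teams wiring this up themselves, time-based one-time passwords (TOTP), the mechanism behind most authenticator apps, take only a few lines with the open-source pyotp library. The account and issuer names below are placeholders:

```python
import pyotp  # pip install pyotp

# Enrolment: generate a per-user secret and a provisioning URI the user
# scans into an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="alice@example.com",
                            issuer_name="ExampleStartup"))

# Login: verify the password first, then the second factor.
code_from_user = totp.now()          # in reality, typed in by the user
assert totp.verify(code_from_user)   # rejects wrong or expired codes
```

The per-user secret must be stored securely server-side; the six-digit codes themselves are worthless moments after they expire, which is what makes them a strong second factor.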

Having centralised password management is also essential. Making employees remember complex passwords leaves room for reuse across personal and professional platforms. Tools such as Bitwarden or Keeper help startups create unique, complex passwords and store them safely, so a breach on a third-party platform does not open a path into internal systems. Such subscriptions are inexpensive for the depth of protection they provide.

Also Read: AI vs AI: Inside Southeast Asia’s new cybersecurity war

Managing the hybrid work perimeter

Flexible work arrangements, such as hybrid or fully remote models, are becoming a defining feature across industries worldwide. Many startups leverage remote talent to stay competitive, but this decentralised model introduces risks as employees access data from unsecured home networks. In fact, 25 per cent of employees working from home are unaware of their devices’ security protocols. Startups must rethink data protection outside the office.

Organisations should implement Virtual Private Networks (VPNs) and cloud-based security layers to protect data outside the office. As cyber resilience becomes a higher priority in remote work environments, defining clear remote work policies and educating employees about the risks of unsecured public Wi-Fi are critical and, fortunately, low-cost.

Continuous digital hygiene and automated patching

In 2026, the speed of digital attacks has increased, often aided by automated tools that scan for known vulnerabilities. Keeping all software and applications up to date is a non-negotiable task. Many regional incidents occur because a business delayed a critical update to avoid a minor disruption, only to leave a vulnerability exposed to opportunistic attackers.

Also Read: How cybersecurity companies can build trust through digital PR

Automated patch management is a cost-effective way to mitigate disruptions caused by outdated software. Most modern platforms offer auto-update features that require minimal configuration. For startups managing cloud infrastructure, using managed services that handle security updates can offload significant technical risk. Maintaining a high standard of digital hygiene ensures the company is not “low-hanging fruit” for the scripts and ransomware variants currently affecting small and medium-sized enterprises.

Leveraging frameworks and local compliance

Founders do not need to build security policies from scratch. Numerous free frameworks provide a roadmap for improving security. The NIST Cybersecurity Framework is a globally respected standard, but regional alternatives provide specific guidance. For example, business owners and IT teams in Singapore should seek the government-created Cyber Essentials Mark to align with region-specific standards. 

Also Read: Code, power, and chaos: The geopolitics of cybersecurity

Adhering to these frameworks also helps with data sovereignty. As countries across Southeast Asia strengthen data governance and protection practices, businesses in the region must demonstrate a baseline of security to remain compliant and avoid fines. Compliance is also a competitive advantage — it signals to enterprise clients and investors that the startup is a mature, responsible partner.

The 3-2-1 backup and recovery strategy

No security system is impenetrable, making a robust backup strategy the ultimate safety net. The “3-2-1” rule remains the industry standard — at least three copies of data on two different media, with one copy kept off-site. This ensures that even during a ransomware attack or hardware failure, the business can be restored without paying a ransom.
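
The rule is simple enough to encode as a sanity check. A minimal sketch, with illustrative media types and locations:

```python
# Each entry describes one backup copy of the same dataset.
copies = [
    {"medium": "local_disk",     "offsite": False},
    {"medium": "nas",            "offsite": False},
    {"medium": "object_storage", "offsite": True},   # e.g. a cloud bucket
]

def satisfies_3_2_1(copies):
    enough_copies  = len(copies) >= 3                         # three copies
    distinct_media = len({c["medium"] for c in copies}) >= 2  # two media types
    one_offsite    = any(c["offsite"] for c in copies)        # one off-site
    return enough_copies and distinct_media and one_offsite

print(satisfies_3_2_1(copies))  # True: the set above passes all three tests
```

A check like this belongs in the same scheduled job that runs the backups, so drift from the policy surfaces immediately rather than mid-incident.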

Regularly testing the recovery process is as important as the backup itself. Many organisations realise too late that their backups were corrupted or that recovery is too slow. Performing a “fire drill” once or twice a year ensures the team knows how to get the business back online within hours. Preparedness is often the difference between a minor incident and a terminal business failure.

Fostering a culture of security and resilience

Ultimately, technical tools are only as effective as the people using them. Building a culture where every team member feels responsible for security is the most cost-effective long-term strategy. By educating employees about the key strategies and frameworks of modern cybersecurity, startups can achieve company-wide safety without spending a fortune.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.


How SaaS companies are valued: Why the multiple is only the surface story

One of the most persistent myths in tech is that SaaS valuation is a simple formula. Take Annual Recurring Revenue (“ARR”), apply a market multiple, and you have your answer.

It is a useful shortcut. It is also how founders end up misunderstanding what their company is actually worth.

Yes, SaaS businesses are often discussed in terms of ARR multiples. But in real transactions, especially exits, the multiple is not the valuation logic. It is the output of it. What buyers are really valuing is the quality of the revenue, the durability of growth, the efficiency of the model, and the type of transaction being done.

That distinction matters because two SaaS companies with the same ARR can produce very different outcomes in the market.

The first point is straightforward: recurring revenue matters more than revenue in general. For most SaaS businesses, valuation is anchored on ARR, not total revenue. That is because recurring subscription revenue is the part that a buyer can actually underwrite with some confidence. It is predictable, repeatable, and, if the business is healthy, compounding.

By contrast, implementation fees, consulting income, or one-off project work may still be commercially useful, but they rarely deserve the same multiple. A company with US$10 million in total revenue, of which US$8 million is recurring, should not expect to be valued the same way as a company with US$10 million in which half the revenue comes from non-recurring services. The first looks like a software asset. The second may still be a good business, but it is not as clean a recurring one.
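
A back-of-envelope illustration makes the gap concrete. The multiples below are invented for the example, not market data:

```python
# Assumed multiples, for illustration only: recurring software revenue
# commands a far higher multiple than one-off services revenue.
SOFTWARE_MULTIPLE = 6.0
SERVICES_MULTIPLE = 1.0

def blended_valuation(recurring_m, services_m):
    """Value each revenue stream on its own multiple (figures in US$M)."""
    return recurring_m * SOFTWARE_MULTIPLE + services_m * SERVICES_MULTIPLE

# Both companies bill US$10M in total revenue.
print(blended_valuation(8, 2))   # 50.0 -> US$50M for the software-heavy mix
print(blended_valuation(5, 5))   # 35.0 -> US$35M when half is services
```

Same top line, a 30 per cent gap in outcome, before any discussion of growth or retention has even started.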

But even that is only the starting point.

What really separates SaaS businesses in valuation is not just the amount of ARR, but the quality of that ARR. And the clearest signal of quality is retention.

This is where many founders become overly optimistic. They see recurring billing and assume the market will view their revenue as durable. Buyers do not think that way. They look at churn first. If customers are leaving too quickly, the business is not truly compounding. It is just running hard to replace what is already falling out of the bottom.

Also Read: The autonomous agent paradigm: Meta’s Manus acquisition, MCP integration, and the disruption of SaaS

As a practical benchmark, SMB SaaS volume churn should generally not be more than three per cent per month. Enterprise SaaS should be far tighter, ideally with near-zero volume churn across core accounts. The exact number is not the whole point. The principle is that retention is a proxy for stickiness, product relevance, and how deeply the software is embedded in customer workflows.
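
The compounding arithmetic explains why three per cent a month is a ceiling rather than a comfortable level:

```python
# Three per cent monthly volume churn compounds into a heavy annual loss.
monthly_churn = 0.03
annual_retention = (1 - monthly_churn) ** 12
print(f"{annual_retention:.1%}")  # ~69.4%: nearly a third of the base is gone
```

A business churning at that rate must replace almost a third of its customers every year just to stand still, which is exactly the treadmill buyers discount for.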

In plain English, buyers pay more for revenue that stays.

That also means an average-growth company with excellent retention can be worth more than a faster-growing business with weak customer durability. Founders often overemphasise growth and underappreciate the penalty the market places on churn. But a leaky SaaS business is not a premium SaaS business, no matter how strong the top-line story sounds in a deck.

Growth still matters, of course. A company growing more than 300 per cent year-on-year will usually attract more attention than one growing at 50 per cent. Faster growth often supports a higher multiple because it suggests a bigger future revenue base and a stronger competitive position.

But growth is not one thing. Buyers care about growth quality.

Was growth driven by healthy demand and repeatable customer acquisition, or by unsustainably high sales and marketing spend? Was it supported by strong expansion within existing accounts, or did it depend on heavy discounting just to win new logos? Is the growth durable, or did the company simply pull revenue forward?

These are not academic questions. They directly shape valuation. High growth with poor retention and weak economics is less impressive than founders like to think. High growth with strong retention and efficient acquisition is where the real premium sits.

This leads to another factor that founders consistently underestimate: margins and unit economics.

Software is attractive because it should scale. That does not mean every SaaS company automatically deserves a strong valuation. Buyers will still look closely at gross margins, customer acquisition cost, payback periods, and overall operating leverage. If the business needs too much spending to maintain growth, or if margins remain thin despite scale, the valuation logic weakens. A recurring revenue business with poor unit economics is not a great asset just because it invoices monthly.
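
One common yardstick here is CAC payback: the number of months of gross profit needed to recover the cost of acquiring a customer. A worked example, with all inputs invented:

```python
# Illustrative unit economics for one customer (figures in US$).
cac = 12_000.0             # fully loaded cost to acquire the customer
monthly_revenue = 1_000.0  # subscription revenue per month
gross_margin = 0.80        # software gross margin

payback_months = cac / (monthly_revenue * gross_margin)
print(payback_months)      # 15.0 months before the customer turns profitable
```

Shorten the payback and growth funds itself; stretch it past the point where churn bites, and the growth engine quietly consumes the cash it was meant to generate.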

Also Read: The agent swarm is unleashed on SaaS

So when people ask how SaaS companies are valued, the better answer is this: not by ARR alone, but by the quality of the machine producing that ARR.

That machine is judged across four big dimensions.

  • First, how much revenue is truly recurring.
  • Second, how sticky that revenue is.
  • Third, how durable and efficient the growth is.
  • Fourth, whether the economics prove the business can scale.

Only after that does the multiple make sense.
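To make the four dimensions concrete, here is a deliberately toy Python sketch. Every factor and number in it is hypothetical; it only illustrates why the same ARR can deserve very different multiples:

    # Toy model (hypothetical factors): the multiple prices the quality of the ARR machine.
    def illustrative_valuation(arr: float, base_multiple: float,
                               retention_factor: float, growth_quality: float) -> float:
        return arr * base_multiple * retention_factor * growth_quality

    # Same US$10M ARR, very different outcomes once quality is priced in:
    sticky = illustrative_valuation(10e6, base_multiple=6, retention_factor=1.1, growth_quality=1.2)
    leaky = illustrative_valuation(10e6, base_multiple=6, retention_factor=0.85, growth_quality=0.8)
    print(f"sticky machine: US${sticky / 1e6:.0f}M, leaky machine: US${leaky / 1e6:.0f}M")

The absolute numbers are invented; the ordering is the point.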

Where this becomes more interesting is when founders confuse fundraising valuation with exit valuation. The two are related, but they are not the same exercise.

In a VC fundraising round, the valuation often reflects future potential more than present-day operating quality. Investors may be willing to pay up because they believe the company could become a category winner, dominate a large market, or grow into a strategically important platform. The valuation is often shaped by what the company might become.

In an exit, especially in M&A, the lens is more grounded. Buyers are usually paying for what exists today, adjusted for what they believe they can realistically achieve after closing. That makes M&A valuation more closely linked to current performance, risk, and transaction logic.

Also Read: The rise of one-person AI companies and why micro-SaaS is at the centre of it

Put differently, fundraising tends to reward possibility. Exits tend to reward evidence.

This is why founders should be careful when using private fundraising rounds as reference points for what their company should be worth in a sale process. A VC may tolerate messy retention, thin margins, or heavy burn if the upside is large enough. An acquirer, particularly one writing a real cheque to buy control, will usually be more disciplined.

Even inside M&A, not all buyers think alike.

A strategic acquirer may pay more because your product fills a capability gap, gives them access to a highly relevant customer base, or creates synergies across product, distribution, or go-to-market. They are not only buying your standalone cash flow. They may also be buying what your company unlocks inside their broader machine.

A financial buyer, by contrast, is usually more disciplined on the headline multiple. They will focus more tightly on retention, margins, cash flow profile, and whether the growth engine is efficient enough to support an investment case. That does not mean they always pay less. It means their logic is usually more rooted in the business as an asset, rather than in strategic overlap or synergy.

So the same SaaS company can produce very different valuations depending on whether the buyer is strategic or financial.

And then there is deal structure, which founders often ignore until it is too late.

A headline valuation is not the same as bankable value. If a buyer offers a rich number, but much of the consideration comes in shares rather than cash, the economics become much less certain. A share swap may look attractive on paper, especially if the acquirer is growing quickly or trades well publicly. But it also means the seller is taking future performance risk, liquidity risk, and market risk on the buyer.

An all-cash offer at a slightly lower headline valuation may, in practice, be worth more because the proceeds are real, immediate, and certain. The same logic applies to earn-outs, deferred payments, and other structured consideration. Founders should not just ask what the price is. They should ask what form the price takes, when it is paid, and what has to happen before it becomes real.
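One way to pressure-test an offer is to discount each form of consideration for its risk. The discounts below are made-up placeholders, not market data; the point is the comparison, not the figures:

    # Illustrative only: headline price vs risk-adjusted value of the consideration.
    def risk_adjusted(cash: float, shares: float, earn_out: float,
                      share_discount: float = 0.3, earn_out_probability: float = 0.5) -> float:
        """Discount non-cash consideration for liquidity, market and performance risk."""
        return cash + shares * (1 - share_discount) + earn_out * earn_out_probability

    offer_a = risk_adjusted(cash=50e6, shares=0, earn_out=0)        # all cash, US$50M headline
    offer_b = risk_adjusted(cash=20e6, shares=30e6, earn_out=15e6)  # US$65M headline
    print(f"Offer A: US${offer_a / 1e6:.0f}M risk-adjusted; Offer B: US${offer_b / 1e6:.1f}M risk-adjusted")

On these assumptions, the US$65 million headline is worth less than the US$50 million of certain cash.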

This is why transaction context matters so much. Market benchmarks can tell you where comparable businesses may sit. But actual outcomes depend on buyer fit, competitive tension, and structure. A strong strategic fit with multiple interested buyers can move valuation above generic benchmarks. A single-bid process with messy diligence and weak retention can drag it below them very quickly.

Also Read: I built an AI agent for myself — it became a 2,000-user micro-SaaS

The uncomfortable truth is that SaaS valuation is less about memorising what multiple the market is paying and more about understanding why one business deserves that multiple while another does not.

Founders who want to improve valuation should stop asking only, “What are SaaS companies trading at?” and start asking better questions.

  • How much of my revenue is truly recurring?
  • How strong is retention by segment and cohort?
  • Is our growth efficient, or just expensive?
  • Do our margins support the software story?
  • Would a buyer see this as a durable asset, or as a promising but risky one?
  • And if I do get an offer, how much of it is actually cash?

That is the real lens.

The market may speak in multiples. But deals are done on quality, confidence, and structure. Founders who understand that early will prepare differently and, usually, negotiate better.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.


Air gapped open source and the secure but stale paradox

There is a familiar comfort in industrial environments that still keep critical systems isolated from the outside world. The argument sounds sensible. If the plant is air gapped, exposure is lower. If exposure is lower, updates can wait. If updates can wait, stability wins. That logic has carried many sites for years, but it is becoming harder to defend as open source components sit deeper inside historians, engineering workstations, remote access stacks, vendor appliances, monitoring tools, and the layers around control.

In operational technology, outages are often unacceptable and must be planned days or weeks in advance, software changes must be thoroughly tested, and deployed technology often remains in service for 10 to 15 years or longer. OT also frequently relies on older operating systems that may no longer be supported. That is the environment in which the paradox appears: the safest plant is not always the one that updates most often, but it is also not the one that quietly ages into unmanageable software risk.

That is why the phrase “secure but stale” matters. In plants, stale software is rarely the result of negligence alone. It is often the result of rational operational discipline. The trouble is that rational local decisions can create strategic drift. A component that was acceptable when commissioned can become difficult to patch, harder to support, and poorly understood by the people still operating it years later. This is not a niche problem. It is part of the structural difference between IT and OT.

The wrong objective is patch speed

Many security discussions still assume that the right answer is to push plants closer to enterprise patching cycles. That is usually the wrong lesson. In industrial settings, speed without operability becomes its own risk. Software updates in OT cannot always be implemented promptly; they need vendor and end-user testing, and may require revalidation with control engineers, security teams, and IT working together. If leaders ignore that and set patch velocity as the headline metric, they will force either unsafe change or quiet non-compliance. Neither outcome is mature.

A better objective is controlled freshness. By that I mean something more realistic than “always current” and more responsible than “indefinitely deferred”. Controlled freshness means every open source component has a known origin, a known owner, a known operational purpose, and a known path to replacement or containment. That is a more serious standard for plants because it respects the reality of shutdown windows while refusing blind trust as a long-term operating model.
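As a sketch of what that standard means in data terms, here is a minimal Python record for one component; the field names are my own shorthand, not taken from any framework:

    from dataclasses import dataclass

    @dataclass
    class ComponentRecord:
        """One open source component under 'controlled freshness'."""
        name: str
        version: str
        origin: str       # where it was acquired: repository, vendor bundle, appliance image
        owner: str        # the person or team accountable for it
        purpose: str      # the operational job it performs
        exit_path: str    # the known path to replacement or containment

    def gaps(record: ComponentRecord) -> list[str]:
        """Empty fields are exactly where blind trust creeps in."""
        return [field for field, value in vars(record).items() if not value]

A component with gaps is not automatically a crisis, but it is no longer invisible.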

Also Read: How to navigate the investment opportunity in climate tech sector

Much software supply chain guidance points in exactly this direction. It treats SBOMs, vendor risk assessment, open source controls, and vulnerability management as complementary capabilities, not substitutes, and it stresses that open source provenance, integrity, support, and maintenance are often not well understood or easy to discover.

Open source is not the problem, unmanaged open source is

There is no value in pretending plants can avoid open source. They already depend on it, often indirectly. The real issue is that many sites do not know precisely where it sits, which versions are deployed, or whether a vendor appliance that looks closed is in fact carrying a stack of ageing open components underneath.

The same guidance holds that organisations should understand suppliers’ use of open source components, acquire those components through secure channels from trustworthy repositories, maintain sanctioned internal repositories, and use hardened internal repositories or sandboxes before introducing components into development environments. It also says that when no vendor-supplied SBOM exists, organisations should perform binary decomposition to generate SBOMs for legacy software where technically and legally feasible.

That changes the leadership question. The issue is no longer whether a plant uses open source. The issue is whether the organisation has operationally useful visibility into that open source estate. In practice, that means knowing which components matter enough to affect production, safety, recovery, vendor support, or incident response. Perfect visibility can wait. Actionable visibility cannot.
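A sketch of what actionable visibility can look like, assuming a CycloneDX-style JSON SBOM is available for each asset (the notion of a “critical” list is my own illustration, not part of the format):

    import json

    def critical_inventory(sbom_path: str, critical_names: set[str]) -> list[tuple[str, str]]:
        """List name/version pairs for components that can affect production or recovery."""
        with open(sbom_path) as f:
            sbom = json.load(f)
        inventory = []
        for comp in sbom.get("components", []):        # CycloneDX keeps components here
            name = comp.get("name", "unknown")
            version = comp.get("version", "unknown")   # "unknown" is itself a finding
            if name in critical_names:
                inventory.append((name, version))
        return inventory

    # e.g. critical_inventory("historian_appliance.sbom.json", {"openssl", "log4j-core"})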

The real control layer is the offline intake model

Air-gapped environments need a better software intake discipline than most enterprises because they cannot rely on frequent corrections later. The strongest plants do not treat updates as downloads. They treat them as engineered releases.

The Secure Software Development Framework is helpful here because it is not written only for fast-moving cloud products. It recommends release integrity verification, including cryptographic hashes and code signing, and it says organisations should securely archive each software release together with integrity verification information and provenance data.

It also calls for provenance data to be maintained and updated whenever software components change, and for policies to cover the full life cycle, including notifying users of the impending end of support and end of life. It further recommends maintaining older versions until transitions from those versions have been completed successfully. In a plant context, that is not administrative overhead. It is the basis for being able to trust an offline release years after it was first imported.
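A minimal sketch of that release discipline in Python: verify the artefact against the hash recorded in the archived release manifest before it ever enters the isolated estate:

    import hashlib

    def sha256_of(path: str) -> str:
        """Stream the file so large artefacts need not fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_release(artifact_path: str, expected_sha256: str) -> None:
        """Refuse the import outright on any mismatch; no partial trust."""
        actual = sha256_of(artifact_path)
        if actual != expected_sha256:
            raise ValueError(f"integrity check failed for {artifact_path}: {actual} != {expected_sha256}")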

Also Read: What big tech won’t show you about the future of AI

This is where many industrial organisations still fall short. They have change control for plant operations, but no proper intake pipeline for software artefacts entering the isolated estate. That gap matters. If a site cannot verify what entered, what dependencies came with it, what integrity checks were performed, and which baseline it replaced, then the air gap is only reducing exposure. It is not creating a trustworthy software discipline.

Compensating controls matter more in OT than most security teams admit

There will always be software that cannot be updated on the timetable security teams would prefer. The mature response is not denial. It is containment.

In OT, the recommended approach is to deploy security controls such as antivirus and file integrity checking, where technically feasible, to prevent, deter, detect, and mitigate malware. Patches should be tested on a sandbox system before production deployment, and “bump in the wire” devices can be installed inline with devices that cannot be updated or that run obsolete operating systems. This is important because it reframes the conversation. When patching is slow, the answer is not to pretend the exposure does not exist. The answer is to tighten the surrounding trust boundary, preserve evidence, and buy time safely.
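Even a crude file-integrity baseline helps here. A minimal sketch: record a hash for every file on the system that cannot yet be patched, then compare runs to spot drift:

    import hashlib
    import os

    def baseline(root: str) -> dict[str, str]:
        """Map every file under root to its SHA-256 digest."""
        digests = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    digests[path] = hashlib.sha256(f.read()).hexdigest()
        return digests

    def drift(old: dict[str, str], new: dict[str, str]) -> list[str]:
        """Paths added, removed, or modified since the baseline was taken."""
        changed = {p for p in old.keys() & new.keys() if old[p] != new[p]}
        return sorted((set(old) ^ set(new)) | changed)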

Vendor discipline is part of plant discipline

A second mistake organisations make is to assume that air gapped constraints excuse weak supplier behaviour. They do not. In fact, they make supplier quality more important. Users should be able to understand which vulnerabilities a patch closes; the status and applicability of patches should be documented; asset owners should keep a documented list of available and applicable patches; and hardening should be retained after patching.

SBOM repositories should be digitally signed and accessible, and open source controls should include secure acquisition channels and component visibility. Buyers should prioritise configuration management, logging, data protection, secure by default design, vulnerability handling, and upgrade tooling when selecting OT products.

That combination points to a harder commercial stance. If a supplier cannot explain what open source is inside the product, how upgrades are packaged, how long components are supported, and how integrity is verified offline, then the product is not merely harder to manage. It is strategically expensive to own.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.


If you have the will, you’ll have the skill

Most teams are still using AI the same way: ask ChatGPT, get an answer, share it around, repeat. It’s a loop.

I did the same for over a year, but AI has become a lot more than that.

Early this year, something changed how I work, and how ourteam (an AI recruiting platform focused on automating candidate screening and evaluation) works, entirely.

I started my career 16 years ago in management consulting. My first task was creating PowerPoint slides. I remember drawing a line that was always bent. Then, my senior taught me a keyboard shortcut (holding the Shift button when dragging the mouse). That became my first ‘skill’ at work.

Today, that word rings very differently.

A ‘skill’ in AI is a text file (commonly known as a markdown file). It contains a set of plain-text instructions that tell the AI what to do consistently every time. It can be built in minutes. It can learn and repeat what takes months and years.
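A made-up example, just to show how little ceremony is involved (the task and the wording here are invented):

    # Skill: weekly candidate pipeline summary
    When asked for the weekly summary:
    1. Pull the list of candidates screened this week.
    2. Group them by role and by stage.
    3. Write a five-bullet summary in our standard reporting tone.
    4. Flag any role with fewer than three active candidates.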

I grew up with the rise of the internet, mobile and cloud. I believe those were critical shifts to get us to where we are today. Yet, this feels different.

AI isn’t just a thinking partner anymore; it’s becoming the ‘system’ teams actually run on.

I have zero technical background and I can’t code, but lately I’ve been managing a team of AI agents that handle our daily work and build our software.

These days, ourteam and I operate with AI agents daily. These agents now catch errors, review work, and ship updates on their own. The team just directs. If you told me this a year ago, I wouldn’t have believed you.

(An ‘AI agent’ is an AI-powered entity that can take actions on its own. It reads files, writes code, sends emails, and runs tests. You give it a goal, and it figures out the steps.)
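Stripped to its skeleton, the loop looks something like the Python sketch below; ask_model() and run_tool() are placeholder stubs standing in for a real model API and real tools:

    # Bare-bones agent loop (placeholders only; not a real framework or API).
    def ask_model(history: list[str]) -> dict:
        """Placeholder: a real agent would call an LLM with the history here."""
        return {"type": "done", "result": "demo output"}

    def run_tool(action: dict) -> str:
        """Placeholder: a real agent would read files, run tests, send emails here."""
        return "observation"

    def run_agent(goal: str, max_steps: int = 10) -> str:
        history = [f"Goal: {goal}"]
        for _ in range(max_steps):
            action = ask_model(history)           # the model proposes the next step
            if action["type"] == "done":
                return action["result"]
            history.append(f"{action} -> {run_tool(action)}")  # act, observe, remember
        return "stopped: step budget exhausted"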

Also Read: The rise of AI agents in healthcare: Designing man-machine systems

Which brings me to the point of intelligence.

In knowledge work, we tend to associate intelligence with execution: knowing how to draw that straight line on PowerPoint, making charts and models on Excel, designing prototypes, programming software, and more.

Those were skills we took months and years to learn. Today, you can create a skill in five minutes.

Most of the intelligence work today can be done faster, cheaper and often better by AI systems. If your use of AI is still just prompting ChatGPT back and forth, there’s actually a lot more out there.

AI models and applications have become so good that real, serious work can be done reliably and consistently. You give it an outcome, and it handles the steps.

That’s the shift most people are underestimating.

Now, the hardest part isn’t handing over execution to AI. It’s everything that comes before and after.

Judgement. Taste. Standards.

Knowing what’s good or bad. What feels right. What should be shipped and what shouldn’t. Those decisions are still on us.

Spending time to think and write, I believe, is one of the most underrated practices left.

I started as a tech enthusiast. Today, I’m a heavy Claude user (Claude-pilled as they say).

From simple chat to using it as a coworker. Now, I’m deep into Claude Code, building and shipping things through what people call “vibe coding”. (And yes, I cancelled my ChatGPT subscription, but that’s a separate story.)

The strange part is this: the more you learn, the more you work.

AI expands what’s possible, so you end up doing more. Anyone actively building with it will tell you the same.

Also Read: AI agents didn’t change how I write, they changed when I could start publishing

On X, there’s a fast-moving community debating AI models, workflows, and sharing best practices: Claude Code vs Codex, agent workflows, open-source tools like OpenClaw.

We used to wait excitedly for Apple Keynote once a year. Now, exciting breakthroughs happen every few days.

Inside ourteam, we’re constantly learning and applying new concepts: agent teams, auto-research, self-healing systems, internal LLM wikis, second brains. These aren’t just “new features”; they add up to a fundamentally different way of working.

We used to identify ourselves singularly: an engineer, a salesperson, a product manager, a customer service manager. What if today we can be all that and more? It is no longer an ‘if’, but ‘when’.

If you have the will, you’ll have the skill.

I was taught to unlearn and relearn. And here I am. It feels weird at times, but it’s super exciting.

Hopefully, this inspires you too.

This article was first published here.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.
