
65labs, the grassroots AI community that won’t stop outgrowing its venue

There was no funding announcement. No government mandate. No corporate strategy deck. Just borrowed rooms, called-in favours, and a recurring observation that Singapore's AI builders had nowhere to collide.

That was the origin of 65labs, now Singapore’s largest grassroots AI builder community with more than 5,000 members. Since its founding, the community has outgrown every venue it has ever been given. This reveals that the demand for such a platform had always been there; what was missing was the occasion.

“The gap was infrastructure,” says Sherry Jiang, co-founder of 65labs and CEO of AI finance startup Peek, in an email interview with e27. “The invisible machinery that Silicon Valley has built over decades — the third spaces, the casual collisions, the culture of showing up for each other without an agenda. Singapore didn’t have enough of it. So we started building some.”

What makes 65labs genuinely unusual is who is running it. Every co-founder holds a full-time role elsewhere. The community is built in the margins: between jobs, on weekends, after hours.

Interestingly, this is not seen as a limitation. Instead, Jiang argues that it is the point.

“Grassroots means you don’t wait for permission,” Jiang says. “No one handed us a mandate or a curriculum. We saw a gap and started filling it.”

Also Read: Konvy bags US$22M to bring more Japanese beauty brands into SEA

In practice, that philosophy shapes everything about how 65labs operates. There are no certifications, no headcounts reported upward, and no KPIs tied to a government grant. The community builds its programming from the bottom up, watching what questions its members are actually asking and which problems they are stuck on, and designing events around that signal.

The people who turn up reflect that approach. Agrim Singh, co-founder of 65labs and CTO and co-founder of Niyam AI, describes the range as genuinely surprising: NUS and NTU students at their first technical event sitting next to engineers with three decades of experience; a father who arrived at a 24-hour hackathon with his two teenage children and competed alongside seasoned founders; and mid-career professionals from finance, law, and healthcare using 65labs as an on-ramp for reorienting their careers around AI.

“What they have in common is that they’re not here to talk about AI,” Singh says. “They’re here to build with it.”

No panels, no speculation

65labs made a deliberate early decision about which events it would and would not run. No panels where people speculate about AI’s future or keynotes from people who are not close to the work. If you take the stage at a 65labs event, you are showing something you actually built: what worked, what broke, what you would do differently.

“That standard filters the room naturally,” Singh says. “People who come to be seen at an AI event stop coming after the first one. People who come to learn and build keep coming back.”

The results of that filter have been visible enough to attract serious outside attention. OpenAI chose 65labs to run its first official Codex hackathon in Singapore. Cursor held its first Singapore event through the community. And now, 65labs is hosting the AI Engineer World’s Fair — the world’s leading conference for AI engineers, backed by OpenAI, Google DeepMind, Cursor, Vercel and Z.ai — when it makes its first-ever Asia stop in Singapore from May 15 to 17.

Also Read: Hiring creatives in the AI age: Skills over titles

Singapore’s unique position

Jiang draws a sharp distinction between what 65labs is building and what top-down initiatives can offer. She invokes an unlikely historical parallel: the Homebrew Computer Club, the garage gathering founded in 1975, where a then-unknown engineer named Steve Wozniak first showed up and went home to begin designing what became the Apple I. The club never had more than a few hundred members, but its cultural legacy is incalculable.

“That’s what grassroots actually means,” she says. “The room exists because we built it. Everyone in it chose to be there.”

Singapore’s geographic and political position, she argues, makes 65labs something more than a local success story. Unlike San Francisco, which she describes as a cultural silo that exports ideas more readily than it imports them, Singapore is structurally wired to look both ways: East and West, emerging markets and developed markets, consumer and enterprise.

“The builders here are naturally exposed to product principles and engineering approaches that have worked across wildly different contexts,” she says. “Western companies are starting to recognise this. They’re not just coming to broadcast. They genuinely want to understand how different markets are solving problems they haven’t cracked yet. That curiosity is new.”

As 65labs scales, its co-founders are clear-eyed about the risks. Singh names integrity as the hardest thing to preserve, the slow erosion that comes from individually defensible decisions that accumulate into something unrecognisable.

“Every community that has lost its culture has lost it the same way,” he says. “We think about that a lot.”

The marker of success they keep returning to is deceptively simple: are the people who showed up before anyone was watching still in the room?

For now, they are. And the room, as ever, is full.

Image Credit: Nicholas Cheng (VideoPulse.io)

The post 65labs, the grassroots AI community that won’t stop outgrowing its venue appeared first on e27.


Konvy bags US$22M to bring more Japanese beauty brands into SEA

Thailand-headquartered online beauty marketplace Konvy has closed a US$22 million Series B round led by Cool Japan Fund (CJF), signalling a sharper push to export its omnichannel playbook across Southeast Asia.

Existing backers, including Insignia Ventures Partners, also participated in the financing.

The capital injection comes at a pivotal moment: Konvy has already entrenched itself as a major force in the kingdom’s beauty and personal care market, and now it wants to turn that domestic strength into regional scale, with the Philippines and Malaysia first in line.

A proven domestic engine

Konvy’s core advantage is its reach across multiple channels. The company combines its own e-commerce site with a presence on leading marketplaces, social commerce activity and offline retail. That omnichannel footprint has allowed it to assemble a catalogue of more than 20,000 SKUs from over 1,000 brands and to secure a position as one of Thailand’s most influential beauty platforms.

Also Read: How technology can influence the beauty and cosmetics industry

That market leadership is not merely about assortment. Konvy has invested in the data and logistics plumbing that knits together transactional channels and customer touchpoints, which the company argues helps it turn product curation into repeat sales and stronger brand relationships.

“We have built a strong leadership position in Thailand, and we are now focused on scaling that success across Southeast Asia,” said Qinggui Huang, Group CEO of Konvy. “With CJF as our lead partner, we are uniquely positioned to bring high-quality Japanese brands to the region while continuing to grow our own portfolio of private label products.”

The quote is revealing for two reasons.

  • First, Konvy still pursues a hybrid strategy: it wants to be both a channel for third-party brands and a manufacturer of private-label goods.
  • Second, the deal with CJF is explicitly strategic, aimed at positioning Japanese beauty and health brands for faster growth in Southeast Asian markets.

Why Cool Japan Fund matters

CJF is not a run-of-the-mill investor. Established to promote Japanese culture and products abroad, it brings sectoral and diplomatic heft in addition to capital. For Konvy, CJF’s participation is less about the cheque and more about the pathway it opens to Japanese manufacturers and brand owners who want an on-ramp into Southeast Asia.

The partnership is bilateral. Konvy gains privileged access to suppliers and products; CJF gains a distribution partner that understands the nuances of Southeast Asian consumer tastes and the region’s varied commerce landscape. For Japanese brands, this is valuable: Southeast Asia’s demand for curated, higher-quality personal care products is rising, but navigating marketplaces, social commerce and offline retail across multiple countries is operationally complex.

Expanding into the Philippines and Malaysia

Konvy’s roadmap is to use the Thai playbook to scale in the Philippines and Malaysia. Both countries present attractive demand-side dynamics: rising middle-class consumption and a growing appetite for curated beauty offerings. Still, they also pose structural challenges such as fragmented distribution, differing payment preferences and language differences.

Also Read: Beauty’s next big bang: Why beauty tech collaboration holds the key to a US$590B future

Konvy plans to transplant its omnichannel model, but it cannot simply replicate operations wholesale. The company must adapt marketing, product selection and fulfilment to local tastes and logistics networks. That will require both local hires and partnerships with regional players, alongside investments in customer insights to avoid treating the region as homogeneous.

Market observers note that social commerce is particularly potent in the Philippines, where influencer-led buying and chat-based transactions remain central. Malaysia, meanwhile, presents a multicultural market with diverse regulatory environments for cosmetics and supplement categories. Konvy’s stated intention to combine marketplace listings, social commerce and offline retail suggests it understands these nuances; execution, however, will determine success.

Private labels and exclusive distribution

Part of Konvy’s pitch is its ambition to scale private-label brands through exclusive distribution agreements with established partners. Private labels offer higher margins and tighter control over assortment, but they also carry inventory and brand risk. Scaling private labels across countries means mastering local regulatory frameworks for product formulation, labelling and claims.

Exclusive distribution plays to Konvy’s strengths in logistics and marketing. By offering select international brands a single point of entry into multiple Southeast Asian markets, Konvy can simplify expansion for brand owners. The firm claims it leverages proprietary consumer insights to help partners grow efficiently. If true, those insights, not just stock and channels, will be the sustainable moat.

Competitive landscape: crowded and fast-moving

Konvy is not the only player racing to aggregate beauty demand in Southeast Asia. Regional marketplaces, global platforms and a wave of vertical-first startups are all vying for consumers’ attention. Social commerce specialists and live-streaming vendors add another layer of competition, particularly for trend-driven and lower-priced items.

To carve out a defensible position, Konvy will need to convert Thai dominance into durable network effects: exclusive brand relationships, loyal customer cohorts and logistics economies across borders. The CJF tie-up could help lock in supply-side advantages, but it will not shield Konvy from competition on pricing, speed and marketing innovation.

Capital allocation and execution risks

US$22 million provides runway, but expansion across multiple countries, scaling private labels and beefing up fulfilment are capital-intensive tasks. Konvy’s playbook will likely require spending on warehousing, local teams, regulatory compliance and marketing, especially in markets where brand recognition is low.

Also Read: Thai beauty e-commerce firm Konvy bags US$10M from Insignia Ventures

Execution risks include misreading local product-market fit, underinvesting in payments and returns infrastructure, and failing to recruit credible on-the-ground partners. Rapid geographic expansion has sunk many once-promising e-commerce plays; Konvy must balance ambition with disciplined market testing.

What success looks like

If Konvy hits its targets, the company could become the default gateway for Japanese beauty brands entering Southeast Asia, a position that would create recurring revenue streams from exclusive deals and private labels, plus valuable consumer data. That would also make Konvy an acquisition target for larger regional platforms or strategic investors seeking category-specific distribution assets.

But success is not guaranteed. The company must demonstrate that its Thai model translates to countries with different cultural tastes, spending power and commerce behaviours. The winning formula will likely combine bespoke local product mixes, aggressive social commerce strategies, and frictionless logistics for both B2C and B2B clients.

Strategic bet, not a fait accompli

Konvy’s Series B is a strategic bet: leverage Thai success, partner with a Japan-focused fund to secure supply, and expand where rising middle-class demand meets digital commerce opportunity. The US$22 million will buy time and capacity, but the real test comes in execution.

For Southeast Asia’s beauty ecosystem, the deal matters because it signals continuing consolidation and the increasing importance of curated, omnichannel distribution models. For Japanese brands, Konvy’s rise offers a plausible route into regional markets without the headaches of building local distribution from scratch. For competitors, it raises the stakes: the marketplace is getting more selective about which partners can translate local leadership into regional influence.

The post Konvy bags US$22M to bring more Japanese beauty brands into SEA appeared first on e27.


Fractional investing: Turning spare change into market exposure

Many of us are already buying life in pieces. We ride-share instead of owning a car, rent co-working desks rather than committing to a long-term lease, and use cloud storage instead of a server. You don’t buy an entire cow to enjoy a glass of milk, and the same goes for markets.

Fractional investing, which allows an investor to buy a fraction of a share or ETF unit rather than a whole one, can help investors turn spare change into market exposure. It helps level the playing field, turning markets from something reserved for those with six-figure salaries into something anyone can participate in.

More than just lowering barriers, fractions change the way people think about money. Every small order becomes a building block and a step forward, and accessibility, affordability and diversification stop being abstract concepts and start becoming part of everyday investing. Fractions are very useful for building the right mix, but fractional investing can still be risky if it leads to lots of small buys without a plan.

An affordable entry point into capital markets

For young investors and fresh graduates, fractional investing can be their first real entry point into capital markets. Instead of waiting years to build a large lump sum, they can start with a small amount and still own a slice of global companies or funds. Fractions remove arbitrary minimums and let people size positions to conviction, not to whatever one full share costs. 

Also Read: Digital wealth platforms hit scale in SEA as foreign investing apps outgrow local rivals

This accessibility matters because it helps turn investing from an intimidating task into a habit that grows with an investor’s income. Fractional investing also lets investors build exposure with just a few spare dollars each month, scaling it up step by step over time. Instead of chasing cheap stocks to build a portfolio as quickly as possible, new and less experienced investors can use fractions to steadily shape a portfolio that matches their priorities and long-term plans without committing to whole shares.

An opportunity to dip into inaccessible assets

Fractional investing is a great way to ease into big-ticket names that would otherwise be out of reach. An example would be Berkshire Hathaway’s Class A shares, which have never been split and still trade at hundreds of thousands of US dollars each (not to be confused with its more affordable Class B shares). As another example, many of us might be familiar with Booking.com, a go-to travel platform, whose parent company Booking Holdings trades above US$5,000 per share. With fractions, investors can start small in such stocks without breaking the bank.
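The arithmetic behind this is simple. As a quick sketch (the prices below are illustrative, not live quotes), a fixed dollar budget maps to a fractional quantity like this:

```python
def fractional_quantity(budget_usd: float, share_price_usd: float) -> float:
    """Return the fraction of one share a fixed dollar budget buys."""
    if share_price_usd <= 0:
        raise ValueError("share price must be positive")
    return budget_usd / share_price_usd

# Illustrative only: US$100 into a stock trading at US$5,000 per share
qty = fractional_quantity(100, 5_000)
print(qty)  # 0.02 of a share
```

The same US$100 that buys 0.02 of a US$5,000 stock would buy nothing at all under a whole-share minimum.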

Fractions can also serve as a way to “test the waters” with new ideas. Trying a small position first to see how it behaves, and scaling up only if it fits the plan, can help investors grow their confidence gradually instead of rushing into trades they may not be ready for.

Diversification remains key

The catch is that while fractions make investing easier to start, they don’t remove the need for discipline. The risk is less about the tool itself and more about how people use it. Think of it like cai png (economy rice): you could take five scoops, but if they’re all meat dishes, you haven’t built a balanced meal, just different forms of the same thing. Diversification means mixing in some vegetables, tofu, or maybe even a fish dish. 

Also Read: From clicks to conversations: Why your next customer in Southeast Asia is an AI agent

In markets, US$30 across five tickers can look like variety, but if they’re all the same theme (such as US mega-cap tech), you’ve built a single-factor bet with extra steps. The rules don’t change just because the ticket sizes are smaller. After all, a risky company is still risky, and a well-diversified fund is still exactly that.
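That kind of hidden concentration is easy to surface. A toy sketch (the tickers and theme labels are hypothetical) that groups small positions by theme shows how five holdings can collapse into one bet:

```python
from collections import defaultdict

# Hypothetical holdings: US$30 spread evenly across five tickers
holdings = {"AAA": 6, "BBB": 6, "CCC": 6, "DDD": 6, "EEE": 6}

# Hypothetical theme label for each ticker
themes = {t: "US mega-cap tech" for t in holdings}

# Sum dollar exposure per theme rather than per ticker
exposure = defaultdict(float)
for ticker, dollars in holdings.items():
    exposure[themes[ticker]] += dollars

print(dict(exposure))  # {'US mega-cap tech': 30.0} — five tickers, one bet
```

Viewed per ticker the portfolio looks diversified; viewed per theme it is a single position.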

For Singaporeans, the hard part isn’t access to investing, but how we assemble and maintain our portfolios. We like to say we’re kiasu and overly cautious, but the numbers tell a different story. Many Singaporeans are running barbell portfolios — very safe on one end and very bold on the other. What’s really missing is the middle: a steady, diversified core that compounds quietly. 

Fractional investing can help fill this middle without turning investing into a second job. Put simply, the edge isn’t finding the next big thing; it’s building a middle that survives the quiet, ordinary months. Decide the mix, set a monthly routine, review on schedule, fine-tune as you go, and ignore the noise in between. Let the fractions do the quiet work while you get on with the things that matter the most. 

The views and opinions expressed are solely those of the author and do not constitute financial, investment, or professional advice.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.


Image credit: Canva

The post Fractional investing: Turning spare change into market exposure appeared first on e27.


Your next hire might not be human and that realisation changes everything

I did not have a big “AI moment.” No dramatic reveal. No boardroom decision where we said, “Alright, let’s replace this role with AI.”

It was quieter than that. It started with something small: scheduling.

At one point, we were juggling multiple client calls across different markets, time zones, and team members. It sounds simple, but it wasn’t. Back-and-forth emails, missed slots, reschedules, overlaps. It took time. And more importantly, it took attention.

So we tried using an AI-powered scheduling assistant. Nothing fancy. Just something to handle availability, propose slots, send confirmations, and follow up if needed.

And within a week, that task was gone from our daily thinking.

No one needed to “own” scheduling anymore. It just… happened.

That was the first moment it hit me. Not in a scary way, but in a very practical one. Something we had assumed required a human, such as coordination, communication, and judgment, was now being handled well enough by a system.

Not perfectly. But well enough that we didn’t feel the need to step in.

That’s when the question shifted.

It wasn’t “can AI do this?”

It became “how many things like this exist in our business?”

We started experimenting more intentionally after that. Research was next. Instead of manually pulling together insights for proposals or campaigns, we used AI agents to gather initial data, summarise trends, and even suggest angles. Again, not perfect. But it reduced the starting friction.

Outreach followed. Drafting first-touch emails, structuring follow-ups, even suggesting subject lines. The team would still refine and personalise, but the heavy lifting was already done.

Reporting was another one. We tested workflows where AI could pull campaign data, summarise performance, and draft a readable report before a human ever touched it.

Individually, none of these felt groundbreaking.

But together, they added up to something bigger.

A layer of work, the kind that used to take a team’s time every single day, was quietly being absorbed.

What surprised me wasn’t just what AI could do. It was how quickly we adapted once we saw it working.

The initial hesitation wasn’t about capability. It was about trust.

The team didn’t push back because they were afraid of losing jobs. They hesitated because they weren’t sure where they fit in this new setup. If the system could draft, summarise, and coordinate, what was their role now?

Also Read: The hire you almost made: Why workflow outlasts hype

That’s something I had to address early.

We had to reframe how we saw work. The goal wasn’t to replace people. It was to remove the parts of the work that didn’t need their full attention.

Once that clicked, things changed.

People stopped seeing AI as something that takes away and started seeing it as something that gives back time, headspace, and energy.

But it wasn’t all smooth.

There were moments where AI fell short, and those moments mattered.

Context was the biggest gap.

AI could draft a decent outreach email, but it didn’t always understand nuance, especially in B2B conversations where tone, timing, and relationship history matter. It could summarise data, but sometimes missed what was actually important.

And when things went wrong, they went wrong quietly.

That was the risk.

A human mistake is usually obvious. An AI mistake can look correct at a glance until it isn’t.

So we kept human checkpoints in place. Not because we didn’t trust the tools, but because we understood their limits.

Another thing we didn’t anticipate was the operational layer that came with it.

Someone had to think about prompts. Someone had to decide what “good output” looked like. Someone had to maintain consistency across tools.

AI didn’t remove management. It changed what needed managing.

If I’m being honest, the biggest shift wasn’t operational. It was mental.

It changed how I think about hiring.

A year ago, if we needed more output, the instinct was to hire. More clients meant more people. More work meant more hands.

Now, that assumption doesn’t hold the same weight.

If I were building a team from scratch today, I wouldn’t start by asking, “Who do I need?”

I’d start by asking, “What actually needs a human?”

Because not everything does.

The roles that feel most secure aren’t the ones tied to execution anymore. They’re the ones tied to thinking, judgment, relationships, and ownership.

Things that require context. Taste. Responsibility.

Everything else is… negotiable.

Also Read: Breaking barriers: Reimagining SME growth with practical AI strategies

And I don’t say that lightly.

I’ve built teams. I care about people. I understand what jobs mean beyond just output.

But ignoring this shift doesn’t protect anyone. It just delays the adjustment.

The reality is, AI agents are already here. Not as a concept, but as quiet operators inside workflows.

They’re not replacing entire teams overnight. But they are reshaping what teams need to look like.

Smaller. Sharper. More focused.

Less about doing everything manually, more about knowing what should be done manually.

If there’s one thing I’d say to another founder thinking about this, it’s this:

Don’t start with replacement. Start with relief.

Find the tasks your team quietly dreads. The repetitive ones. The ones that drain energy without adding much value.

That’s where AI fits best.

Not as a headline. Not as a strategy.

Just as a way to make work feel a little lighter.

And once you feel that shift, even in a small way, you start seeing your business differently.

Not everything needs a human.

But the things that do matter more than ever.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.

Join us on WhatsApp, Instagram, Facebook, X, and LinkedIn to stay connected.

The post Your next hire might not be human and that realisation changes everything appeared first on e27.


The velocity of obsolescence: Why technical debt is your greatest macro risk in 2026

In the business cycles of the past, obsolescence was a slow process. A factory or a retail chain had decades to depreciate its assets before a competitor rendered them irrelevant. But in April 2026, the timeline of decay has collapsed. We are living in the era of the velocity of obsolescence.

For the modern enterprise, the most dangerous line item on the balance sheet isn’t debt to a bank—it is technical debt disguised as innovation. As a founder who spends my days dissecting Enterprise Risk Management (ERM) systems, I see a recurring pattern: companies rushing to integrate agentic AI and hyper-automation on top of brittle, legacy foundations. They are building skyscrapers on top of 19th-century plumbing. In 2026, this isn’t just an IT headache; it is a macroeconomic risk that can wipe out market caps overnight.

The fragility of the AI wrapper

The last two years saw a gold rush of startups and enterprise features that were essentially wrappers around global AI models. At the time, it looked like rapid innovation. Today, it looks like a liability.

If your enterprise’s core intelligence is dependent on a third-party API that you don’t control, you have no moat. More importantly, you have no architectural resilience. When those underlying models update, pivot, or change their security protocols, your innovative feature breaks.

The risk here is systemic dependency. True enterprise SaaS in 2026 must be modular. It must allow you to swap your intelligence layer without tearing down your operational layer. At Prospero, we call this model-agnostic risk management. It’s the only way to ensure that your software doesn’t become obsolete the moment the next version of an LLM is released.
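The article doesn’t show Prospero’s implementation, but the modular principle it names — swapping the intelligence layer without tearing down the operational layer — is commonly expressed as an adapter interface: the workflow depends on an abstraction, and concrete model providers plug in behind it. A minimal sketch, with all names hypothetical:

```python
from abc import ABC, abstractmethod

class IntelligenceLayer(ABC):
    """Abstraction the operational layer depends on; providers plug in behind it."""
    @abstractmethod
    def summarise_risk(self, report: str) -> str: ...

class ProviderA(IntelligenceLayer):
    def summarise_risk(self, report: str) -> str:
        return f"[provider-a] summary of {len(report)} chars"

class ProviderB(IntelligenceLayer):
    def summarise_risk(self, report: str) -> str:
        return f"[provider-b] summary of {len(report)} chars"

def run_risk_workflow(model: IntelligenceLayer, report: str) -> str:
    # Operational layer: unchanged no matter which model is swapped in
    return model.summarise_risk(report)

# Swapping the intelligence layer touches one argument, not the workflow
print(run_risk_workflow(ProviderA(), "q1 exposure report"))
print(run_risk_workflow(ProviderB(), "q1 exposure report"))
```

When the underlying model updates or is replaced, only the provider class changes; the risk workflow itself never does.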

Identity as the new perimeter

In 2026, the network perimeter is dead. With the rise of remote work and decentralised AI agents, you can no longer protect your enterprise with a simple firewall. The new perimeter is identity.

Many enterprises are still struggling with fragmented identity systems. They have separate logins for their CRM, their HRIS, and their Risk Management tools. This fragmentation is a massive operational risk. A robust enterprise architecture requires a single source of truth for identity.

Also Read: Technology debt is the risk company boards keep deferring – until it becomes a crisis

This is why we obsess over seamless integration with protocols like LDAP and Keycloak. If your identity management isn’t integrated into your risk engine, you cannot automate safely. An autonomous AI agent is only as safe as the permissions it inherits. Without a unified identity layer, you are giving a “digital employee” the keys to the castle without knowing which doors they are opening.
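The principle — an agent is only as safe as the permissions it inherits — can be made concrete with a small check against a single identity store. This is an illustrative sketch of the general idea, not Keycloak or LDAP API code; every name in it is hypothetical:

```python
# Single source of truth: one identity record per principal, human or agent
IDENTITY_STORE = {
    "alice":        {"roles": {"risk-analyst"}},
    "report-agent": {"roles": {"report-reader"}, "inherits_from": "alice"},
}

ROLE_PERMISSIONS = {
    "risk-analyst":  {"read:reports", "write:reports"},
    "report-reader": {"read:reports"},
}

def permissions_of(principal: str) -> set[str]:
    """Union of permissions granted by the principal's roles."""
    perms: set[str] = set()
    for role in IDENTITY_STORE[principal]["roles"]:
        perms |= ROLE_PERMISSIONS[role]
    return perms

def agent_may(principal: str, action: str) -> bool:
    perms = permissions_of(principal)
    # A "digital employee" can never exceed the human it inherits from
    parent = IDENTITY_STORE[principal].get("inherits_from")
    if parent:
        perms &= permissions_of(parent)
    return action in perms

print(agent_may("report-agent", "read:reports"))   # True
print(agent_may("report-agent", "write:reports"))  # False
```

Without a unified store like this, the intersection in `agent_may` is impossible to compute — which is exactly the “keys to the castle” problem the paragraph describes.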

The legacy AI crisis

We are now seeing the first wave of legacy AI—systems built in the 2023-2024 hype cycle that are now unmaintainable. They were built for demos, not for durability.

The risk of legacy AI is twofold:

  • Data toxicity: AI models trained on unvetted or biased data that now produce hallucinations in critical risk reports.
  • Code bloat: Custom-built AI features that are so deeply hard-coded into the system that they cannot be updated without breaking the entire Enterprise Resource Planning (ERP) stack.

This is why security-by-design is the only sustainable path. Risk management shouldn’t be a module you “add” to your software; it should be the framework upon which the software is built. If risk isn’t in the code, it isn’t in the company.

Macro-implications: The cost of inflexibility

From a regional perspective, the companies that will lead ASEAN in the next five years are the ones that can pivot their business models in weeks, not years.

Also Read: Atome lines up US$345M debt as Southeast Asia fintechs shun equity

If your technical architecture is a monolith of technical debt, you are macro-inflexible. You cannot respond to new OJK regulations in Indonesia, you cannot integrate with the latest regional payment systems in Singapore, and you cannot scale your risk protocols to a new market.

In 2026, inflexibility = insolvency. The market is moving too fast for companies that are held back by their own legacy software.

Closing thoughts: Building foundations, not just features

The message to the community is a call for architectural rigour. As founders and leaders, we must resist the temptation to ship flashy features that add to our technical debt. Instead, we must invest in the boring, difficult, but essential work of building integrated enterprise foundations.

We need systems that are modular, sovereign, and identity-centric. We need risk management that is baked into the architecture, not slapped on as a post-script.

The winners of 2026 won’t be the ones with the most AI bells and whistles. They will be the ones who built antifragile systems — platforms that evolve as fast as the market, stay as secure as an army, and remain as transparent as a glasshouse.

Stop building for the demo. Start building for the decade.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.


The post The velocity of obsolescence: Why technical debt is your greatest macro risk in 2026 appeared first on e27.


Featherless.ai wants to make AI model switching as easy as streaming Netflix

Featherless.ai founder and CEO Eugene Cheah

Featherless.ai, a US-headquartered startup founded and led by Singapore-born CEO Eugene Cheah, has a blunt mission: make the messy, fast-changing world of open-source AI easy to run in production.

The company recently raised US$20 million in Series A funding co-led by AMD Ventures and Airbus Ventures, and plans to use the capital to scale global infrastructure, launch a marketplace for specialised open models and deepen hardware integrations to cut inference costs.

In plain English, Featherless helps companies run lots of open-source AI models quickly, cheaply and safely, without forcing them to rely on one giant model or on a single cloud vendor.

Also Read: Featherless.ai secures US$5M to make AI inference faster and cheaper

What sets it apart is an operational promise that sounds almost magical: hot-swapping models in under five seconds, compared with the typical 30 minutes on a GPU. It’s a capability that, if it works at scale, could change how organisations deploy models: from one-size-fits-all behemoths to specialised fleets tailored to discrete tasks.

How hot-swapping actually works

Cheah explains the technical rethink that enables rapid model swaps. “Most inference providers treat each model like a standalone deployment. Load the full weights, warm up the runtime, and serve. Each requires hours of setup. That works fine if you’re running one model. We run over 30,000. And we plan to scale to millions; you can’t have millions of GPUs on standby for every model,” Cheah says.

Featherless’s approach is a systems-level redesign. Models live in hot, warm or cold states across a multi-tier cache and memory-management layer covering the GPU fleet. When a request targets a model that isn’t resident, the platform “hydrates” it from a pre-optimised checkpoint rather than raw weights, an optimisation that dramatically reduces load time.

Three engineering pillars make this possible: normalising and quantising weights at ingest time, proprietary storage and memory-loading techniques for GPUs, and a demand-prediction scheduler that pre-stages models before requests arrive.
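In rough terms, the hot/warm/cold lifecycle behaves like a tiered LRU cache: requests promote models toward GPU residency, and eviction pushes the least-recently-used models down a tier. The sketch below is illustrative only; the tier names mirror the article, but the class, slot counts, and load times are invented for the example and say nothing about Featherless's actual implementation.

```python
from collections import OrderedDict

# Illustrative tiers: "hot" = resident on GPU, "warm" = pre-optimised
# checkpoint in host memory, "cold" = checkpoint in remote storage.
# Load-time constants are made up for the sketch, not measured figures.
LOAD_SECONDS = {"hot": 0.0, "warm": 2.0, "cold": 30.0}

class ModelCache:
    """LRU cache that promotes models toward the hot tier on demand."""

    def __init__(self, hot_slots=4, warm_slots=64):
        self.hot = OrderedDict()   # model_id -> loaded weights
        self.warm = OrderedDict()  # model_id -> optimised checkpoint
        self.hot_slots = hot_slots
        self.warm_slots = warm_slots

    def fetch(self, model_id):
        """Return (tier the request was served from, simulated load seconds)."""
        if model_id in self.hot:
            self.hot.move_to_end(model_id)
            return "hot", LOAD_SECONDS["hot"]
        tier = "warm" if model_id in self.warm else "cold"
        self._promote(model_id)
        return tier, LOAD_SECONDS[tier]

    def _promote(self, model_id):
        # Evict the least-recently-used hot model down to warm;
        # overflow from warm falls back to cold storage.
        if len(self.hot) >= self.hot_slots:
            evicted, ckpt = self.hot.popitem(last=False)
            self.warm[evicted] = ckpt
            while len(self.warm) > self.warm_slots:
                self.warm.popitem(last=False)
        self.warm.pop(model_id, None)
        self.hot[model_id] = object()  # stand-in for loaded weights

cache = ModelCache(hot_slots=2)
print(cache.fetch("llama-8b"))  # ('cold', 30.0): first request hydrates
print(cache.fetch("llama-8b"))  # ('hot', 0.0): now resident
```

A demand-prediction scheduler, as described above, would simply call `_promote` ahead of anticipated traffic so the first real request already finds the model warm or hot.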

There are trade-offs. “The first inference on a freshly swapped model carries slightly higher latency, a few hundred milliseconds more than a model that’s been sitting warm for hours. In practice, users don’t notice. The real trade-off was engineering effort,” Cheah says. The payoff is higher utilisation and lower cost, especially in environments that require many specialised models rather than a single monolithic system.

Model pluralism in practice

Featherless pitches itself as an antidote to the “one-model-to-rule-them-all” mindset. The platform lets enterprises define intents (for example, code generation, German customer support, or compliance summarisation) and routes each intent to the best-fit model, with fallbacks and failover chains.

“Model pluralism should not mean operational pluralism,” Cheah says. “The whole point of 30,000 models is that you always get the right one. But the system delivering it should feel like one thing, not 30,000 things.”

Practically, customers run a thin orchestration layer that maps business tasks to Featherless endpoints; the platform handles selection, versioning and serving. Monitoring is unified around tasks rather than individual models, making A/B testing and swaps painless.
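The routing idea can be sketched as a table mapping intents to failover chains, with the first healthy model winning. The model names, the routing table, and the health-check callback below are all hypothetical, not the actual Featherless SDK.

```python
# Hypothetical intent routing table; model names are placeholders.
ROUTES = {
    "code_generation":    ["qwen-coder-32b", "deepseek-coder-6.7b"],
    "german_support":     ["llama-70b-de", "mistral-7b"],
    "compliance_summary": ["longctx-34b", "llama-70b-de"],
}

def route(intent, is_healthy):
    """Return the first healthy model in the intent's failover chain."""
    for model in ROUTES.get(intent, []):
        if is_healthy(model):
            return model
    raise RuntimeError(f"no healthy model for intent {intent!r}")

# Swapping the primary model for an intent changes the table,
# not the calling code; monitoring stays keyed by intent.
print(route("code_generation", lambda m: m != "qwen-coder-32b"))
# deepseek-coder-6.7b
```

This is why A/B testing becomes painless in such a design: the caller only ever names the task, so changing which model serves it is a one-line table edit.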

Quality, safety and languages

Offering a vast catalogue of open models creates obvious questions about safety, bias and multilingual performance. Featherless applies a layered curation approach: automated screening for licences and architecture checks, inference health tests, and surfaced metadata to help teams make informed choices. Enterprise customers can add stricter tiers: bias benchmarking, multilingual audits and consistency testing.

Also Read: Will the rise of AI mean the ‘termination’ of humankind?

“We don’t claim perfect parity; that would be dishonest given the state of the field,” Cheah says, acknowledging the uneven quality of models across languages. The firm’s history with RWKV (a model architecture designed for multilingual efficiency) informs both research and serving decisions. Featherless stresses transparency: training data provenance, benchmark results and limitations are made available so customers can match models to their needs.

Low-resource and morphologically complex languages pose extra challenges. There’s less high-quality training data, tokenisation can be inefficient and standard transformer architectures hit scaling limits for long contexts. Featherless evaluates models across language families with standardised benchmarks and works with customers to build task-specific evaluation datasets. The company is careful not to promise parity when the underlying data and modelling aren’t yet in place.

Sovereignty, hardware and regional strategy

Featherless frames “AI sovereignty” as a three-layer problem: data residency, model provenance and hardware dependency. On the first layer, the solution is straightforward: deploy where data must stay. On the second, open models make provenance auditable and replaceable. The third layer, hardware, is the trickiest: much of production AI today runs on a proprietary stack dominated by a single vendor.

“That’s why our AMD partnership and ROCm investment isn’t just commercial; it’s strategic,” Cheah says. Featherless aims to prove the stack can run on open hardware with open software, reducing vendor lock-in at the compute layer.

The company is bullish on Southeast Asia’s potential for AI: pragmatic regulation, mobile-first engineers accustomed to multilingual products, and geographic proximity to major compute hubs. The weak points are familiar: insufficient regional GPU capacity and shallower venture capital. Cheah calls for public-private investment in compute and model development tailored to local needs.

Governance, audit trails and compliance

Featherless recognises enterprise concerns about reproducibility and auditability. Bitwise reproducibility across GPU runs is difficult due to non-deterministic floating-point behaviour, so Featherless prioritises practical reproducibility. “Pinned model versions, fixed quantisation configs, seeded sampling parameters. Same model version + same config + same seed = same output,” Cheah says. The platform versions every model configuration and logs model IDs, version hashes, configurations, and routing metadata for each request. Enterprises can also opt for private deployments so data never leaves their perimeter.
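The "same version + same config + same seed = same output" contract can be illustrated with a toy sampler that derives its RNG state from the pinned request configuration. Everything here (the function, the config fields, the hashing scheme) is a hypothetical sketch for the idea, not Featherless's API.

```python
import hashlib, json, math, random

def deterministic_sample(logits, config):
    """Pick a token index reproducibly from a pinned request config."""
    # Derive the RNG seed from the full config (model version, seed,
    # temperature...) so identical requests replay identically.
    key = json.dumps(config, sort_keys=True).encode()
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    rng = random.Random(seed)
    # Temperature-scaled softmax sampling, in pure Python for clarity.
    t = config.get("temperature", 1.0)
    exp = [math.exp(l / t) for l in logits]
    r = rng.random() * sum(exp)
    for i, e in enumerate(exp):
        r -= e
        if r <= 0:
            return i
    return len(logits) - 1

cfg = {"model": "llama-70b", "version": "a1b2c3", "seed": 42,
       "temperature": 0.7}
# Replaying the same config over the same logits gives the same token.
assert deterministic_sample([1.0, 2.0, 0.5], cfg) == \
       deterministic_sample([1.0, 2.0, 0.5], cfg)
```

Logging the config alongside a model version hash, as the article describes, is what lets an auditor replay any individual request months later.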

Handling licences and problematic training data is treated as a transparency exercise rather than a legal shield. Models are classified by licence at ingest, customers see licence details up front, and enterprise customers can filter models by licence category. Featherless maintains a watch list for models with provenance concerns and highlights models trained on explicit public-domain or licensed datasets.

When one model fails

Cheah offers a concrete example to illustrate the costs of a single-model approach. A Series B fintech used a single large closed model for everything—chatbots, transaction categorisation, and compliance summarisation. Over time, costs ballooned, latency rose during peak traffic, and GDPR obligations complicated European expansion.

Also Read: AI adoption is an area of maturity for SMEs, but they have advantage over big corporations: Aicadium

After decomposing workloads across Featherless, the company saw roughly a 65 per cent reduction in total inference costs and substantial latency improvements: conversational workloads were moved to a smaller, faster model (latency down 70 per cent, cost down 80 per cent for that workload), compliance tasks ran on a long-context model in the EU, and categorisation moved to a lightweight classifier. Importantly, governance became tractable.

Risks and the road ahead

Cheah is candid about the threats to Featherless’s thesis: hyperscalers undercutting pricing, consolidation of model development, hardware disruptions and an edge shift where devices handle more inference. His response is to double down on neutrality, breadth of catalogue, optimisation depth and vendor-agnostic engineering. “Open models win, inference needs to be efficient, neutrality matters. Those hold regardless of which specific risk plays out,” he says.

Featherless’s bet is operational: make it trivial to run many open models reliably, cheaply and compliantly across geographies and hardware. If that works, customers can stop shoehorning every problem into a single massive model and instead use the right tool for each job. It’s a practical vision that leans on engineering rather than hype — and that may be precisely what enterprises need as the AI landscape fragments into dozens, hundreds or thousands of specialised models.

The post Featherless.ai wants to make AI model switching as easy as streaming Netflix appeared first on e27.


The truth behind the CLARITY Act lobby blitz: Crypto to the moon or banks compromise

The digital asset market currently reflects a complex tapestry of legislative hope and aggressive capital rotation. Total market valuation climbed 2.08 per cent in just 24 hours, reaching US$2.74T. This move aligns closely with traditional finance, as evidenced by an 87 per cent 30-day correlation with the S&P 500 index. While many observers look to pure technical indicators, the underlying strength stems from a growing belief that the CLARITY Act will finally establish a federal framework for the industry.

This optimism acts as a tailwind for prices even as a shadow looms in the form of a last-minute offensive from the traditional banking sector. The current rally suggests that participants are beginning to price in the possibility of a regulated future, even as the establishment fights to maintain its grip on dollar deposits and payment flows.

Capital is clearly searching for higher returns beyond the established giants. The Altcoin Season Index jumped 4.26 per cent in 24 hours and 22.5 per cent over the week to reach a level of 49. This indicates a significant shift in trader behaviour, as capital flows into higher-beta assets with specific growth stories. Sui serves as a prime example: its price surged by over 24 per cent after a Nasdaq-listed firm staked 108.7M tokens, representing 2.7 per cent of the total supply.

This move created an immediate supply shock by removing millions of tokens from the active sell side. Combined with the announcement that African fintech giant Paga would integrate with the Sui network, the asset demonstrated that targeted adoption news now outweighs general market movements. Traders are no longer just buying the broad market. They are hunting for specific catalysts and supply dynamics that can deliver outsized gains.

Bitcoin itself continues to hold the line at US$82,139.04, marking a 1.83 per cent increase that tracks the broader market cap rise of 1.88 per cent. Trading volume for the leading asset spiked by 48.97 per cent. This confirms that the break above the US$82,000 psychological level has weight and attracts both retail and institutional participation. Data from derivatives markets suggests that leverage played a heavy hand in this climb. Open interest for Bitcoin futures surged past the previous all-time high set in 2025.

This influx of leveraged positions triggered a classic short squeeze, with short liquidations totaling US$23.93M in 24 hours. This represents a 16.67 per cent increase over the previous period. When short sellers face forced buybacks, they inadvertently push prices higher, creating a cascade of upward pressure. This feedback loop benefits spot holders but also increases the risk of a sudden reversal if the market becomes overextended on borrowed capital.

Also Read: Agentic economy: The real promise of AI and crypto convergence

Market indicators provide a nuanced view of this momentum. Data highlights that while the 14-day Relative Strength Index sits at 68.43, it has not yet hit the extreme levels that typically signal an immediate crash. Bitcoin dominance holds steady near 60.15 per cent. This suggests that the rally has not yet fully rotated capital into smaller tokens, despite gains in the altcoin sector. Social sentiment remains bullish with a net score of 5.21 out of 10.

Traders consistently highlight profitable trades in the altcoin market. Total open interest across all assets rose 6.07 per cent to reach US$451.72B. This shows that new money is entering the derivatives space to bet on further gains. These bets amplify price moves and ensure that volatility remains a constant companion for those navigating these markets.

The regulatory landscape remains the most potent driver for long-term sentiment and institutional trust. The CLARITY Act represents a rare moment of bipartisan cooperation between Senators Thom Tillis and Angela Alsobrooks. Their hard-won compromise focuses on a critical distinction for stablecoins. It prohibits passive, deposit-style interest but allows rewards tied to actual usage, transactions, or liquidity provision.

This framework would allow the industry to flourish while theoretically protecting consumers from the risks associated with unregulated shadow banking. Prediction markets like Polymarket now place the odds of passage at 75 per cent. Public support appears robust, with a HarrisX poll showing 52 per cent of voters favour the move. This legislation aims to reshore digital asset activity to American venues. Such a move could potentially end the dominance of offshore issuers like Tether and bring innovation back to domestic soil.

Traditional financial organisations are not watching these developments passively. Just four days before the May 14 Senate Banking Committee markup, powerful trade groups, including the American Bankers Association and the Bank Policy Institute, launched a concerted effort to derail the yield compromise. These organisations sent a joint letter urging senators to scrap the rewards carve-out entirely.

While they publicly cite consumer protection concerns, their internal analysis reveals a deeper fear about their own profit margins. These banks warn that yield-bearing stablecoins could drain enough liquidity from the traditional system to reduce consumer, small-business, and farm lending by 20 per cent or more. This battle is essentially a struggle for control over the future of dollar deposits and the rails of the global payments system.

The outcome of this markup will determine whether non-bank issuers retain the room they need to innovate or whether the United States is left with its current fragmented regime.

Also Read: Crypto-gold correlation hits 69%: Where smart money is rotating next

Timing is now the greatest risk for the pro-crypto camp and the broader market structure. If the Senate Banking Committee advances the bill without reopening the fight over yields, a July 4 signing target at the White House remains a realistic possibility. If the banking lobby successfully delays the markup beyond the May 21 Memorial Day recess, the entire effort could reset and lose its momentum.

Policy experts warn that missing this window could delay the development of clear rules until a new Congress takes office in the coming years. This uncertainty explains why social sentiment remains cautiously bullish at 5.21 out of 10. Traders are celebrating recent gains but remain wary of the political hurdles that lie ahead. The market is at an inflection point, where the durability of the current rotation hinges on whether leadership can maintain momentum amid institutional pushback from legacy finance.

Investors should recognise that this rally is not just a random price fluctuation. It is a reaction to a specific legislative shift that threatens the traditional banking monopoly. The push by banks to strip stablecoin rewards from the CLARITY Act proves that they see digital assets as a legitimate threat to their lending models and deposit bases. If the act passes in its current form, it will validate the point of view that clear rules and usage-based rewards are the true catalysts for the next phase of growth.

For now, the market is betting that the senators will hold their ground against the banking lobby. If they succeed, the shift of capital from Bitcoin into select altcoins with strong narratives will likely continue. If they fail, the industry may have to wait much longer for the clarity it needs to fully integrate with the global financial system and move away from its offshore roots.

The clash between the crypto market and the banking sector is reaching a boiling point. This is healthy for the end user, as it drives innovation and offers more choices about where and how to hold value. The coming weeks will reveal whether the legislative process can withstand the pressure from established interests or yield to the status quo. If the current momentum holds, we are witnessing the birth of a new era in digital finance.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.


The post The truth behind the CLARITY Act lobby blitz: Crypto to the moon or banks compromise appeared first on e27.


Amplicity raises US$1M to turn idle backup batteries into profit engines

L-R: Amplicity co-founders Gabriel Schiano (CTO) and Stéphane Leyo (CEO)

Singapore-based Amplicity has secured US$1 million in a seed investment round from investors, including ENGIE, to commercialise a simple but increasingly compelling idea: the batteries sitting inside data centres and industrial facilities should not be treated as expensive ornaments waiting for a blackout.

The startup builds a control layer that allows sites to use existing or planned battery systems, including UPS infrastructure, to cut electricity costs and earn revenue from energy markets without undermining backup readiness. The timing is apt: operators across Asia Pacific are being squeezed from several directions at once, with volatile power prices, grids under pressure, and large energy users facing sharper scrutiny over Scope 2 emissions.

Also Read: The surprising economics of orbital data centres — and the real solution

According to Amplicity co-founders, most backup batteries sit idle for more than 99 per cent of the time. In a region racing to build more data centres and industrial capacity, that is a lot of underused capital.

“For years, backup energy systems like UPS have been treated as passive insurance: essential but unproductive,” CEO Stéphane Leyo said.

That framing is neat, but the bigger story is not about idle hardware. It is about whether Asia’s next wave of energy infrastructure will be built from scratch or sourced from existing assets.

A regional problem hiding in plain sight

Amplicity is targeting a pain point that is especially visible in Southeast Asia. The region’s electricity demand is still climbing, while its digital infrastructure footprint is expanding fast. Singapore remains one of Asia’s most important data centre hubs even under tighter efficiency rules, while nearby Johor and Batam are benefiting from spillover demand. Indonesia is building out its own data centre and industrial estate capacity.

Australia, meanwhile, has become one of the world’s most active markets for battery economics, thanks to its volatile wholesale power market and mature ancillary services opportunities.

In all these markets, resilience matters. Data centres, semiconductor plants, logistics facilities and large industrial sites cannot afford downtime. That means backup batteries are already widespread. The problem is that they are usually sized for emergencies, then left untouched except for periodic testing.

From an engineering perspective, that has long made sense. From an economic perspective, however, it increasingly looks wasteful.

That is the opening Amplicity wants to exploit. Its software sits on top of those battery assets. It aims to do two things at once:

  1. Shave costly on-site demand peaks
  2. Where market rules allow, dispatch battery capacity into energy or grid-service markets to generate recurring income.
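The first of those two goals can be seen in a toy dispatch loop that discharges the battery only when site load exceeds a peak target, and never draws it below a backup reserve. The function, numbers, and interval length below are invented for illustration and are not Amplicity's control logic.

```python
# Toy peak-shaving dispatch. The battery discharges whenever site load
# exceeds a target peak, but always keeps a backup reserve intact.
def dispatch(load_kw, capacity_kwh, reserve_kwh, peak_target_kw,
             max_discharge_kw, interval_h=0.5):
    """Return a list of (battery discharge kW, grid draw kW) per interval."""
    soc = capacity_kwh  # start fully charged
    plan = []
    for load in load_kw:
        excess = max(0.0, load - peak_target_kw)
        # Power available this interval without touching the reserve.
        usable = max(0.0, soc - reserve_kwh) / interval_h
        discharge = min(excess, max_discharge_kw, usable)
        soc -= discharge * interval_h
        plan.append((discharge, load - discharge))
    return plan

# A site peaking at 500 kW against a 420 kW target, 30-minute intervals.
profile = [380, 430, 500, 470, 390]
for kw_batt, kw_grid in dispatch(profile, capacity_kwh=200,
                                 reserve_kwh=120, peak_target_kw=420,
                                 max_discharge_kw=100):
    print(f"battery {kw_batt:5.1f} kW  grid {kw_grid:5.1f} kW")
```

Because demand charges are typically billed on the highest interval of the month, capping grid draw at the target during a handful of peaks is where most of the saving comes from; the reserve floor is what preserves backup readiness.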

The addressable opportunity is not small. Asia Pacific is one of the fastest-growing regions for both stationary storage and data centre construction. The data centre UPS market alone is already worth billions of US dollars globally, with Asia accounting for a meaningful share. Add commercial and industrial battery systems, and the battery hardware footprint that could theoretically be optimised runs into the many billions. The software, services, and revenue-sharing layer built on top of that is easily a large regional opportunity in its own right.

Also Read: The AI server boom in Southeast Asia: Why data centres are running out of power

In Southeast Asia specifically, the total addressable market (TAM) is less about selling more batteries than about monetising batteries that are already being installed for resilience or compliance reasons. That makes the sales motion more attractive in a capital-constrained environment.

Why Singapore and Australia matter

Amplicity’s initial focus on Singapore and Australia is not accidental.

Its home market, Singapore, offers a dense concentration of exactly the kind of customer the company wants: energy-intensive, uptime-obsessed operators under pressure to improve efficiency and decarbonise. Data centres in the city-state face land constraints, regulatory scrutiny and high expectations around energy performance. If Amplicity can prove that UPS systems can be run as economic assets without compromising mission-critical operations, Singapore becomes a strong reference market.

Australia is different, but arguably even more lucrative in the short term. Its electricity market is far more dynamic, with greater price swings and a deeper set of opportunities for batteries to earn money through arbitrage and grid services. A battery that is economically attractive in Singapore can become materially more valuable in Australia if it is exposed to the right market signals. For a startup trying to show hard returns, this is crucial.

Together, the two markets provide a useful test bed: Singapore for operational credibility with demanding customers, Australia for energy-market monetisation.

ENGIE’s upside goes beyond venture optics

ENGIE’s continued presence on Amplicity’s cap table is also strategically important. For the French energy giant, backing a company like Amplicity is a way to deepen its position in distributed energy, behind-the-meter optimisation and customer-facing decarbonisation services.

ENGIE already operates across energy supply, services and infrastructure. A company like Amplicity gives it another lever: the ability to unlock flexibility from customer-owned battery fleets without having to fund or own all the underlying hardware. If those batteries can be orchestrated safely at scale, ENGIE benefits from a stronger customer proposition, new service revenues and potentially more flexibility to support energy trading or retail operations where regulations permit.

In plain English, Amplicity gives ENGIE a software-led route to value that would otherwise remain trapped in backup systems.

Not a white space market

Amplicity is not entering an empty field. Globally, energy storage optimisation and distributed energy management are already crowded categories. Fluence, Stem, Wärtsilä, Schneider Electric, Eaton, ABB and Vertiv all operate somewhere along the spectrum of battery control, microgrid management, site energy optimisation or resilience infrastructure. Some of them are enormous. Schneider Electric, ABB and Eaton are industrial heavyweights with global reach, while Fluence has built a large listed energy storage platform. Stem became one of the better-known software-led storage players in the United States, even if that segment has had a bruising few years.

In Asia and Australia, the picture is similarly active. Utilities, aggregators, and energy service providers already monetise batteries through virtual power plants, demand response programmes, and ancillary services markets. What makes Amplicity slightly different is the narrowness of its wedge. Rather than leading with new battery deployments, it focuses on extracting value from backup and UPS assets customers already have or were going to buy anyway.

That distinction matters because mission-critical operators are often willing to consider software layers and performance-based commercial models long before they are willing to rip out their energy architecture.

The decarbonisation case is real, but not automatic

Amplicity also pitches a climate angle, and this deserves a more sober reading than startup boilerplate usually gets.

Batteries do not reduce emissions by default. If they charge from a fossil-heavy grid at the wrong time and discharge later without displacing dirtier generation, the decarbonisation benefit can be limited. The value comes from how they are controlled.

Amplicity’s case is that smarter battery dispatch can reduce peak demand, shift consumption away from more carbon-intensive periods, help integrate more renewable power and reduce the need for peaking generation. For companies measured on Scope 2 emissions, that can translate into verifiable improvements, especially if battery operation is tied to auditable reporting. In data centres and industrial sites, where electricity demand is both large and visible, even modest efficiency and load-shifting gains can matter.

Also Read: The AI-energy paradox: Will AI spark a green energy revolution or deepen the global energy crisis?

That is why this is more than a niche optimisation play. It sits at the intersection of energy cost management, grid flexibility and corporate decarbonisation.

US$1 million is a modest round by clean-tech standards, and Amplicity still has to prove that site operators will trust a young company with assets designed for worst-case scenarios. But the thesis is hard to dismiss. Asia is adding more batteries, not fewer. The grid is becoming more complex, not less. And businesses are less willing than ever to leave expensive infrastructure idle just because that used to be standard practice.

For Amplicity, the bet is that the next big energy asset in the region is not a shiny new battery farm. It is the one already sitting in the basement, waiting for somebody to give it a job.

The post Amplicity raises US$1M to turn idle backup batteries into profit engines appeared first on e27.


Your AI strategy isn’t broken, your leadership structure is

I’ve sat in enough AI strategy meetings to know what the real problem is. It’s not the technology.

Your AI strategy isn’t broken. Your leadership structure is. Ninety-five per cent of AI pilots fail to deliver ROI. The problem isn’t the model; it’s who owns the decision, and whether they can explain it.

The models work. They genuinely do. I’ve watched teams demo AI systems that are genuinely impressive — fast, accurate, commercially relevant. And then I’ve watched those same teams, six months later, quietly shelve the project because no one could answer a simple board question: who’s accountable when this goes wrong?

That question kills more AI initiatives than bad data ever will.

The 2026 AI & Data Leadership Benchmark puts a number on it: 95 per cent of AI pilots fail to produce measurable business value. Read that again. Not 30 per cent. Not 50 per cent. Ninety-five. And in almost every postmortem I’ve seen, the failure wasn’t technical. It was structural — absent accountability, unexplainable decisions, and governance that arrived about eighteen months too late.

Here are the five places where that structure breaks — and what it actually takes to fix it.

You can’t explain the decision. That’s your problem now.

There’s a version of this that sounds abstract until it happens to you. Your AI system made a decision — a pricing call, a credit rejection, a hiring shortlist — and now someone is asking you to explain it. Not the engineering team. You.

The National CIO Review found that 90 per cent of CIOs say their professional reputation now directly depends on AI outcomes. And 85 per cent say missing traceability has already killed or stalled projects they were responsible for.

This is the trap nobody talks about clearly enough: when you adopt AI for its speed, you inherit accountability for outcomes you may not be able to reconstruct. The model moved fast. The audit trail didn’t.

What actually helps:

Build the decision trail before the first production deployment — not after the first crisis. Name a human owner for every AI system that touches pricing, hiring, credit, or customer outcomes. Make sure that the person can explain the decision logic in plain language to a board that didn’t ask for a technical briefing.

Your autonomous agent is making decisions. Does anyone know in what order?

Agentic AI is the shift that snuck up on a lot of leaders. We went from “AI that recommends” to “AI that acts” faster than most governance frameworks could follow. These systems now schedule meetings, draft contracts, initiate vendor communications, and escalate purchase orders — all without a human in the loop.

Forrester estimates 60 per cent of Fortune 100 firms will appoint a dedicated Head of AI Governance by the end of 2026, specifically because of agentic risk. That’s not a trend. That’s a fire alarm.

Also Read: Burning billions: AI’s capital frenzy and its global implications

The question I ask every leadership team deploying agentic AI is simple: What’s the trigger? At what point does the agent pause and ask a human? Most teams don’t have a clean answer. That gap is where the expensive mistakes happen.

What actually helps:

Before you go live, document the categories of action that require human confirmation. Build an immutable log of what the agent does. Test the rollback. The escalation protocol isn’t a nice-to-have — it’s the only thing standing between you and a consequential autonomous decision you can’t reverse.
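As a sketch of what that might look like in code, the snippet below gates flagged action categories behind human confirmation and writes every action to a hash-chained, append-only log. The category names, class, and logic are hypothetical illustrations, not a reference design.

```python
import hashlib, json, time

# Action categories that must pause for human confirmation;
# the taxonomy here is illustrative, not a standard.
REQUIRES_HUMAN = {"payment", "contract", "vendor_commitment"}

class AgentLog:
    """Append-only, hash-chained action log: a minimal audit trail."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64

    def record(self, action):
        entry = {"ts": time.time(), "action": action, "prev": self._prev}
        # Chaining each hash to the previous one makes silent edits
        # to history detectable on replay.
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self._prev = digest
        return digest

def execute(action, category, log, confirm):
    """Run an agent action, escalating to a human where required."""
    if category in REQUIRES_HUMAN and not confirm(action):
        log.record({"action": action, "status": "blocked"})
        return "escalated"
    log.record({"action": action, "status": "executed"})
    return "done"

log = AgentLog()
print(execute("schedule meeting", "calendar", log, lambda a: False))  # done
print(execute("issue purchase order", "payment", log, lambda a: False))  # escalated
```

The point of the structure is the one the article makes: the escalation trigger is declared before go-live, and the log exists whether or not anything goes wrong.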

When everyone owns AI, a board can smell it immediately.

I’ve heard this framing in more organisations than I can count: “AI is a shared responsibility across the business.” It sounds collaborative. It is, in practice, a way of ensuring no one is actually responsible.

Only 38 per cent of companies have a unified AI leadership role. Of the organisations consistently reporting strong AI ROI, nearly all have one thing in common: a single person accountable for AI outcomes — not AI tools, not AI infrastructure, AI outcomes — with a direct line to the CEO.

JPMorgan put its AI executive on a 14-person operating committee reporting directly to Jamie Dimon. That’s not a benchmark to aspire to. That’s the baseline for what serious AI governance looks like in 2026.

What actually helps:

Stop distributing accountability as if it were a feature. Appoint one person. Give them the CEO reporting line. The internal fight over whether this role belongs in technology or in business is worth having — because whoever wins that fight is setting the strategic agenda for the next decade.

AI is answering faster than you’re thinking. That’s a design flaw, not a feature.

UNSW Business School research tracks over 150 cognitive biases that affect human decision-making. The one that concerns me most in an AI context is anchoring — the way we over-weight the first recommendation we encounter. When AI surfaces a recommendation in milliseconds, it anchors your thinking before your own independent analysis has even started.

McKinsey puts it precisely: AI is most valuable when it augments human judgment. But its speed creates structural conditions that suppress the formation of independent judgment in the first place. I’ve watched executives nodding along to AI recommendations they hadn’t actually interrogated — not because they were lazy, but because the workflow was designed to move fast and the AI output was already on the screen.

What actually helps:

Redesign the sequence. Human analysis first. AI recommendation second. For any decision that’s high-stakes and hard to reverse, make the independent view a governance requirement — not a personal discipline that gets skipped when the calendar fills up.
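One way to make that sequence a hard requirement rather than a personal discipline, sketched here with invented names, is a decision record that refuses to reveal the AI recommendation until an independent human view has been logged.

```python
# Sketch of an "analysis-first" gate: the reviewer must log an
# independent judgment before the AI recommendation is revealed.
# Class and field names are hypothetical.
class DecisionRecord:
    def __init__(self, decision_id):
        self.decision_id = decision_id
        self.human_view = None
        self.ai_view = None

    def log_human_view(self, view):
        self.human_view = view

    def reveal_ai_view(self, view):
        if self.human_view is None:
            raise RuntimeError("independent analysis required first")
        self.ai_view = view
        return view

rec = DecisionRecord("pricing-2026-Q1")
try:
    rec.reveal_ai_view("raise prices 4%")   # blocked: no human view yet
except RuntimeError as e:
    print(e)
rec.log_human_view("hold prices; churn risk")
print(rec.reveal_ai_view("raise prices 4%"))
```

Keeping both views on the record also gives the board something to audit: it shows where human and AI judgments diverged, not just what was finally decided.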


Governance built early is a moat. Governance built after a failure is just damage control.

The framing I hear most often is wrong: that governance slows AI down, that it creates friction, that it's something you layer on once you've proven the use case. Every organisation I've seen operate from that assumption has eventually paid the price — in stalled deployments, eroded board confidence, or a regulatory intervention that arrived without warning.

The organisations generating genuine AI ROI in 2026 built governance early — before regulation demanded it, before a failure event required it. They report faster deployment cycles now, not slower. Because the board trust was already there. The audit trail was already there. When regulatory scrutiny arrived, it was a conversation, not a crisis.

What actually helps:

Take the governance investment to your board as a compounding asset with a payback period — not a compliance cost. The EU AI Act and its equivalents are converting what was a differentiator into a compliance floor. Build now, and you’re ahead. Build when you’re forced to, and you’re just keeping up.


The position worth taking

If I had to distil everything above into one claim worth staking a career on, it’s this:

AI is a governance problem disguised as a technology problem. And the leaders who solve governance first are the ones who will still be standing when the dust settles.

That’s not a framework from a consulting deck. It’s what the evidence actually shows — across MIT Sloan, McKinsey, the National CIO Review, Harvard, and IBM — when you read it without the vendor framing.

The five intersections above aren’t abstract. They’re the decisions sitting on your desk right now. How you treat them — as technology problems or as leadership problems — will define what your AI story looks like in two years’ time.


The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.


The post Your AI strategy isn’t broken, your leadership structure is appeared first on e27.


Singapore’s AI tools are ready. Its workforce isn’t

Singapore’s businesses have largely figured out how to buy AI tools. What they have not figured out is what to do with the humans sitting next to them.

That is the central finding of a sweeping new report by Accenture, released Monday, which warns that Singapore's AI ambitions risk stalling not because of a technology gap, but because of a people one. Titled Singapore's Growth Mandate: Why the AI future will be won or lost on people, not technology, the report draws on four research streams conducted between December 2025 and February 2026, and paints a picture of a nation caught between digital momentum and human inertia.

The headline numbers are, on the surface, encouraging. Nine in 10 Singapore enterprises have moved beyond merely exploring AI tools into active implementation. Nearly half have deployed generative AI within specific business units, and nearly three quarters are experimenting with or exploring agentic AI.

But peel back the tech layer, and the numbers grow uncomfortable. Only one in three organisations has a talent strategy that is fully aligned with its AI strategy. Nearly half of tech leaders surveyed admitted their companies had yet to redesign job roles or responsibilities at all.


The cost of this misalignment is quantifiable. Organisations that placed people at the centre of their AI transformation in 2025 grew revenue 1.8 percentage points faster and profits 1.4 percentage points faster than peers that did not. In a market as competitive as Singapore's, that is not a rounding error.

Young workers, old assumptions

Perhaps the sharpest finding concerns the country’s entry-level workforce, a cohort that is ambitious, digitally native and, according to the research, being quietly set up to fail.

Entry-level job postings rebounded eight per cent in 2025, suggesting the labour market is upgrading rather than collapsing. But what those roles demand has shifted dramatically. Postings for entry-level ICT positions fell 38 per cent between 2022 and 2025, while demand for AI, machine learning, and data management skills accelerated sharply. Routine, repeatable tasks are being compressed. Roles that combine domain knowledge, analytical reasoning, and the ability to deploy AI tools are expanding.

Young Singaporeans sense the shift. Fully 95 per cent believe Singapore’s ambition to lead in AI is achievable. Yet just 31 per cent strongly agree that the ambition is genuinely people-centric. Their anxiety is specific: 81 per cent report beginner-level or zero understanding of prompt engineering — the skill most commonly cited as a gap — and 80 per cent report the same when it comes to AI ethics and governance. Nearly half worry about keeping pace with the speed of AI change.

The report reserves its most striking finding for last. Only 23 per cent of Singaporean employees genuinely trust their employer to act in their best interest when introducing AI tools, a figure that sits in stark contrast to a global benchmark of 83 per cent from Accenture's separate Pulse of Change research.


That gap is not merely a morale problem. Trust, the report argues, is a hard operational requirement. When employees do not believe their organisations will invest in them through an AI transition, they disengage from upskilling. And 47 per cent of respondents already identify a lack of leadership support as the single biggest barrier to building AI fluency effectively.

Accenture frames the challenge as a leadership imperative, not a human resources task. Mark Tham, Accenture’s Country Managing Director for Singapore, said business leaders must elevate talent strategies to an equal — or greater — priority than technology adoption, noting the country has largely mastered deploying AI tools but has yet to grapple seriously with redesigning the work around them.

Prime Minister Lawrence Wong’s Budget 2026 pledge of “no jobless growth” in the AI era gives the report a pointed political context. Accenture’s conclusion is blunt: Singapore’s CEOs are, in effect, the implementation layer of that national mandate. The algorithms are ready. The question is whether the organisations built around them are.

Image Credit: Annie Spratt on Unsplash

The post Singapore’s AI tools are ready. Its workforce isn’t appeared first on e27.