From pilot to production: Where robotics actually breaks

Rahul Nambiar, CEO and co-founder of Botsync

Industrial automation in Southeast Asia is moving beyond experimentation into real-world execution, and the stakes are rising quickly. In January 2026, Singapore-based robotics company Botsync secured additional Series A funding from SGInnovate, signalling growing investor confidence in the region’s smart manufacturing push. But scaling robotics is not just about deploying machines; it is about integrating intelligence into complex, live operations.

In this interview with e27, Rahul Nambiar, CEO and co-founder of Botsync, breaks down what it takes to move from pilot projects to multi-site rollouts, where automation delivers real ROI, and why orchestration is becoming the true battleground. As labour shortages intensify and manufacturing digitises, Botsync’s journey offers a closer look at how robotics is evolving from a technical solution into critical industrial infrastructure.

Also Read: What’s changing inside Southeast Asia’s factories with IsCoolLab

Edited excerpts:

What’s the single most important capability this additional investment buys: hardware, software, talent, or market access?

The capital allows us to invest further in expanding the orchestration and intelligence capabilities of SyncOS, our no-code, vendor-agnostic automation control platform. This will ensure our users get the best possible solutions for their robotic fleets.

You’re positioning Botsync as moving from “startup momentum” to “regional scaleup”. What operational bottleneck tends to break first when autonomous mobile robot (AMR) deployments move from pilot to multi-site rollouts in the region?

During a pilot phase, users primarily focus on technical feasibility and whether your product can work in their facility. The impact on their operations is minimal.

After transitioning to a full rollout, this changes quickly, as you become a critical component in their operations. Users now care more about whether their operational key performance indicators (KPIs) are being met than whether the technology looks feasible or impressive. This involves handling edge cases, defining response timelines when something fails, optimising systems based on continuous feedback, and implementing business continuity plans in the event of system failure.

The operational side of the company, such as whether we have hired the right support team, built the right processes, and built redundancy into our systems, soon becomes as critical as the technology itself.

Your pitch leans on labour shortages and inefficiencies. In practice, where does automation deliver the fastest payback in warehouses here: picking, putaway, replenishment, line-feeding, or yard operations? Where do buyers still overestimate what robots can do?

The greatest value from automation occurs when, in addition to automating a manual process, the data it collects enables users to further optimise their operations. This could take the form of lower operational error rates, more accurate prioritisation, shorter order fulfilment times, and so on.

Also Read: 🤖Rise of the machines: 20 robotics startups shaping Southeast Asia’s future

In Botsync’s case, we leverage the integration enabled by SyncOS and the data we collect from multiple machines at each stage of production to ensure the accurate, timely delivery of parts between assembly lines and the warehouse within factories. This allows us to deliver value by ensuring higher manufacturing process uptime and better visibility into the entire process, in addition to the physical automation we provide.

Two hundred and thirty per cent revenue growth is big, but growth can be “cheap” or “expensive”. What’s driving it: larger contract values, more sites per customer, better margins, or simply more hardware shipped?

We are seeing revenue growth from two areas:

  • Larger expansion within the same site of a customer
  • Expansion to new sites by the same customer

This has also allowed us to keep customer acquisition costs under control as we scale up.

Botsync works across manufacturing, warehousing, and intra-logistics. What’s your core product wedge today: fleet management software, the robots themselves, integration services, or a full-stack automation solution? How does that choice affect scalability?

Botsync’s primary product wedge today is the integration and process intelligence capability of SyncOS. It encompasses fleet management, allows AMRs and automated guided vehicles (AGVs) to communicate with other automation systems such as robotic arms, programmable logic controllers (PLCs), and conveyor belts, and uses AI to enable data-driven decision-making. This allows customers to maximise the efficiency of their deployed automation.
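As a hypothetical illustration of what vendor-agnostic orchestration can mean in practice (all names and the dispatch rule below are invented for illustration, not SyncOS’s actual API), a fleet controller can wrap each vendor’s robots behind one common interface and assign a transport task to the nearest idle unit, regardless of manufacturer:

```python
from dataclasses import dataclass

@dataclass
class Robot:
    robot_id: str
    vendor: str        # e.g. "vendor_a", "vendor_b" -- placeholder names
    position: tuple    # (x, y) in metres on a shared facility map
    busy: bool = False

def nearest_idle(robots, pickup):
    """Return the closest idle robot to the pickup point, or None."""
    idle = [r for r in robots if not r.busy]
    if not idle:
        return None
    return min(idle, key=lambda r: (r.position[0] - pickup[0]) ** 2
                                   + (r.position[1] - pickup[1]) ** 2)

fleet = [
    Robot("amr-1", "vendor_a", (0.0, 0.0)),
    Robot("agv-7", "vendor_b", (5.0, 1.0), busy=True),
    Robot("amr-3", "vendor_a", (2.0, 2.0)),
]
chosen = nearest_idle(fleet, pickup=(3.0, 3.0))
print(chosen.robot_id)  # "amr-3": closest among the idle units
```

The point of the sketch is the shared abstraction: once every vendor’s machine answers the same queries, orchestration logic no longer cares whose hardware it is driving.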

Singapore often talks about smart manufacturing and advanced automation. From your on-the-ground conversations with manufacturers and logistics players, where is policy genuinely accelerating adoption, and where is it still not translating into operational reality?

Singapore’s push for smart manufacturing and logistics automation is closely aligned with Manufacturing 2030, which aims to grow the sector by 50 per cent in value-added output and establish Singapore as a global hub for smart, green, and high-value manufacturing. Policies and funding have accelerated adoption among mid-sized manufacturers and third-party logistics (3PL) operators, while manpower constraints and tighter foreign worker quotas have made automation a commercial necessity. Budget 2026 further strengthens this drive, with expanded support under the Productivity Solutions Grant (PSG) for AI and automation, the launch of National AI Missions and Council to coordinate sector-wide transformation, and continued RIE2030 investment in robotics, AI, and advanced manufacturing.

Despite these measures, adoption isn’t uniform. Legacy systems and fragmented operations continue to slow integration, and many companies that run successful pilots struggle to scale across multiple sites due to interoperability and workforce-readiness gaps. ROI expectations versus real-world deployment timelines also remain a challenge, particularly for smaller firms trying to translate grant support into measurable productivity gains.

Looking to 2026, what’s the biggest technical or commercial bet in your roadmap: multi-robot orchestration, richer perception and safety, interoperability with legacy systems, or moving towards robotics-as-a-service? What would make you change course?

Looking to 2026, our read of the market shapes our biggest bets: multi-robot orchestration, interoperability with legacy systems, and enhanced intelligence to handle dynamic operations and edge cases.

Also Read: The transformative potential of humanoid robots: A VC perspective

Multi-robot orchestration is increasingly practical thanks to Singapore’s national robotics standards and testbeds, which enable coordination of heterogeneous fleets. Interoperability continues to be a challenge, as highlighted by IMDA’s AMR x Digital Leaders initiative, which helps companies integrate new robotics with existing warehouse management systems.

We continually assess the market landscape and customer needs, and we see growing demand for autonomous mobile robots (AMRs) and integrated robotics solutions. Our commitment remains to provide autonomous solutions tailored to our customers, and we would adjust our roadmap if breakthroughs in perception and safety, broader ecosystem standardisation, or shifts in customer priorities make alternative approaches more effective or efficient.

The post From pilot to production: Where robotics actually breaks appeared first on e27.

The coming identity crisis of agentic AI

In the race to build autonomous AI agents—software that can book flights, negotiate contracts, execute financial transactions, or run entire workflows on behalf of humans—a quieter but equally critical debate is unfolding behind the scenes.

How do you identify and authorise an AI agent?

Right now, several major technology communities are attempting to answer that question simultaneously. Groups such as the World Wide Web Consortium, the OpenID Foundation, the Decentralised Identity Foundation, and the Trust Over IP Foundation are all exploring mechanisms for identity, authentication, and delegation in what many now call the agentic economy.

Each community brings its own philosophy.

The World Wide Web Consortium focuses on core web architecture and decentralised identifiers. The OpenID Foundation specialises in authentication and authorisation protocols such as OpenID Connect and OAuth. The Decentralised Identity Foundation builds open infrastructure for self-sovereign identity systems. Meanwhile, the Trust Over IP Foundation focuses on governance frameworks and trust networks.

Individually, each effort is valuable.

Collectively, they risk creating a fragmented identity landscape just as AI agents begin to proliferate across the internet.

And the stakes are high.

If autonomous agents are going to operate in financial markets, government services, enterprise systems, and consumer platforms, the world will need a reliable way to verify who or what is acting.

Without that, the agentic internet could quickly become a chaotic ecosystem of unverifiable bots.

Why fragmentation is inevitable

The risk of fragmentation is not simply the result of organisational rivalry. It is largely structural. Technology evolves far faster than standards bodies.

Developers building agent frameworks today cannot wait three years for formal protocols to emerge. They will ship systems using whatever identity mechanisms exist — API keys, OAuth tokens, decentralised identifiers, or proprietary authentication models.

Meanwhile, standards organisations deliberate carefully, balancing security, interoperability, and governance.

By the time a standard is finalised, the ecosystem may already have moved on.

This dynamic has played out before. The early internet saw competing encryption protocols, rival messaging systems, and incompatible browsers before a handful of dominant standards emerged.

The same evolutionary process may now be happening with agent identity.

Also Read: The digital lag: How traditional consulting is failing to grasp the agentic AI revolution

The real goal is interoperability

The instinctive response to fragmentation is often to call for a single universal standard.

But the internet rarely works that way.

Instead, it evolves through layers. Different technologies coexist, but they communicate through shared interfaces. Email servers may run different software, but they all speak SMTP. Websites may be built with different frameworks, but they all rely on HTTP and TLS.

The same layered model may be the best path forward for agent identity.

Rather than forcing convergence on one protocol, the ecosystem may need to focus on shared primitives that allow different systems to interoperate.

These primitives could include portable identity artefacts such as decentralised identifiers, verifiable credentials, and authorisation tokens.

An AI agent might authenticate using one protocol while presenting credentials issued by another system, with trust frameworks defining how those credentials are validated.

In other words, the agent identity ecosystem may look less like a single standard and more like a modular identity stack.
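As a toy sketch of such a modular stack (the registry, field names, and proof labels below are hypothetical, loosely inspired by the W3C Verifiable Credentials model rather than any finished standard), a relying party might accept an agent’s credential only if a trust framework lists its issuer for that proof type:

```python
import time

# Toy trust framework: issuer identifier -> proof mechanisms accepted from it.
TRUST_REGISTRY = {
    "did:example:acme-hr": {"ed25519-signature"},
    "did:example:bank-kyc": {"ed25519-signature", "oauth2-token"},
}

def accept_credential(credential, now=None):
    """Accept a credential iff its issuer is trusted for its proof type
    and it has not expired. Returns (ok, reason)."""
    now = now if now is not None else time.time()
    issuer = credential.get("issuer")
    proof = credential.get("proof_type")
    if proof not in TRUST_REGISTRY.get(issuer, set()):
        return False, "issuer not trusted for this proof type"
    if credential.get("expires_at", 0) <= now:
        return False, "credential expired"
    return True, "ok"

agent_credential = {
    "subject": "agent-42",
    "issuer": "did:example:acme-hr",
    "proof_type": "ed25519-signature",
    "expires_at": time.time() + 3600,
}
ok, reason = accept_credential(agent_credential)
print(ok, reason)  # True ok
```

The credential, the protocol that carried it, and the trust decision are three separate layers here, which is exactly what lets differently-built systems interoperate.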

Open implementations matter more than documents

One lesson from past internet standards is that specifications alone rarely drive adoption.

Working code does.

Open reference implementations—wallets, credential exchanges, agent authorisation frameworks—can serve as anchors for the ecosystem. When multiple communities build on shared open-source infrastructure, fragmentation often resolves itself organically.

Developers gravitate toward tools that work. And once those tools gain momentum, standards tend to follow the architecture already in use.

The importance of cross-foundation collaboration

Another way to reduce fragmentation is simple: collaboration.

If the W3C defines core identity primitives, the OpenID Foundation could create authentication profiles for agents. The Decentralised Identity Foundation could build the supporting infrastructure. The Trust Over IP Foundation could establish governance frameworks that determine how trust is established between networks.

Also Read: Agentic AI in action: How Southeast Asia’s startups are turning constraints into strengths

This kind of layered collaboration mirrors how the internet itself evolved.

No single organisation built the web. Instead, a loose constellation of standards bodies, open-source communities, and industry alliances shaped its architecture over time.

Agent identity may require the same approach.

A new kind of digital identity

What makes the challenge especially complex is that agent identity is fundamentally different from human identity.

A human identity system answers questions like:

  • Who is this person?

Agent identity must answer additional questions:

  • Who authorised this agent?
  • What permissions does it have?
  • Who is accountable for its actions?

An AI agent booking a meeting might have minimal privileges. One managing supply chains or executing financial trades might have enormous authority.

Identity systems must therefore support delegation chains, where humans or organisations grant agents specific capabilities—and where those capabilities can be audited or revoked.
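A minimal sketch of such a delegation chain (illustrative only; a real system would use signed, verifiable tokens rather than in-memory records) might require every link from the agent back to the root grantor to permit an action, with revocation of any link cutting off everything downstream:

```python
REVOKED = set()  # grant IDs that have been revoked

def grant(grant_id, issuer, subject, capabilities, parent=None):
    """One link in the chain: issuer delegates capabilities to subject."""
    return {"id": grant_id, "issuer": issuer, "subject": subject,
            "capabilities": set(capabilities), "parent": parent}

def allowed(chain_leaf, action):
    """Walk from the agent's grant up to the root; the action must be
    permitted by every link, and no link may be revoked."""
    link = chain_leaf
    while link is not None:
        if link["id"] in REVOKED or action not in link["capabilities"]:
            return False
        link = link["parent"]
    return True

root = grant("g1", "alice", "ops-team", {"book_meeting", "pay_invoice"})
leaf = grant("g2", "ops-team", "agent-42", {"book_meeting"}, parent=root)

print(allowed(leaf, "book_meeting"))  # True
print(allowed(leaf, "pay_invoice"))   # False: never delegated to the agent
REVOKED.add("g1")
print(allowed(leaf, "book_meeting"))  # False: upstream grant revoked
```

Because each link carries its own issuer and scope, the chain is auditable: you can always answer who authorised the agent, with what permissions, and on whose behalf.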

This problem sits at the intersection of identity, authorisation, and governance.

And no single standards body currently owns all three.

Competitive convergence

If fragmentation sounds alarming, history suggests it may also be necessary. Innovation often begins with competing ideas. Over time, the ecosystem experiments, discards weak approaches, and converges around the solutions that prove scalable and secure.

The early internet did not begin with cleanly aligned standards. Neither did cloud computing, mobile ecosystems, or cryptocurrencies.

Agent identity may follow the same trajectory.

A period of experimentation—messy, decentralised, and occasionally incompatible—may ultimately produce stronger systems than a prematurely unified standard.

The infrastructure of the agentic economy

As AI agents begin acting autonomously across the digital economy, identity will become one of the most critical pieces of infrastructure.

Without reliable identity and delegation mechanisms, autonomous agents cannot safely interact with banks, governments, enterprises, or consumers.

But solving the problem will require more than a single protocol.

It will require an ecosystem — a layered architecture where multiple standards, technologies, and governance models can interoperate.

Fragmentation may be unavoidable.

The real question is whether the communities building agent identity today can ensure that their systems eventually connect.

If they do, the agentic internet could become as interoperable as the web itself.

If they do not, the next generation of AI agents may inherit a fragmented identity landscape just as complex—and contentious—as the early days of the internet.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.

Join us on WhatsApp, Instagram, Facebook, X, and LinkedIn to stay connected.

The post The coming identity crisis of agentic AI appeared first on e27.

What should states do about Meta-national platforms

For decades, governments have regulated companies as market participants. Tax them. License them. Fine them. Break them up if necessary.

But what happens when a company stops behaving like a firm — and starts functioning like infrastructure?

Not infrastructure in the traditional sense of roads or ports. Digital infrastructure.

Platforms that:

  • Clear cross-border payments.
  • Allocate capital at scale.
  • Coordinate labour across jurisdictions.
  • Optimise supply chains in real time.
  • Govern participation through code.

In a previous piece, we explored how fintech and AI infrastructure startups could evolve into “meta-national” platforms — systems that operate across borders, arbitrage jurisdictions, and become indispensable to economic coordination.

The question now is no longer hypothetical. It is strategic.

What should governments do about them?

Ignore them? Co-opt them? Compete with them? Constrain them?

The wrong answer could accelerate fragmentation. The right answer could reshape sovereignty for the digital age.

First: Recognise the shift from firms to systems

Governments often treat large platforms as “big companies.”

That’s outdated.

Some fintech and AI platforms are evolving into:

  • Monetary rails.
  • Identity layers.
  • Capital allocation engines.
  • Labour coordination networks.

When a platform becomes the default system through which millions earn, transact, and allocate capital, it is no longer just a private actor.

It becomes systemic infrastructure. And systemic infrastructure carries sovereign implications.

The first mistake governments make is assuming this is simply a competition issue.

It’s not.

It’s an institutional evolution.

Second: Avoid reflexive overregulation

The instinctive response to systemic platforms is control:

  • Heavy licensing regimes
  • Data localisation mandates
  • Strict capital restrictions
  • Forced domestic hosting

These measures may protect short-term policy control.

But they also create fragmentation.

When digital systems are forced into rigid territorial silos, two outcomes emerge:

  • Platforms are designed around the restrictions and relocated strategically
  • Domestic innovation falls behind global infrastructure layers

Overregulation may weaken state control rather than strengthen it.

Meta-national platforms thrive in regulatory arbitrage environments.

If a government makes participation too difficult, the platform does not disappear.

It simply routes around.

Third: Compete through performance, not prohibition

Digital platforms gain legitimacy through performance:

  • Faster settlement
  • Lower costs
  • Better allocation
  • Higher reliability

If citizens prefer digital currency rails over domestic banking systems, the problem is not merely regulatory. It is competitive.

Governments must ask:

Why are users choosing external platforms?

  • Is domestic banking too slow?
  • Are remittance costs too high?
  • Is SME credit inaccessible?
  • Is regulatory friction excessive?

The long-term solution is not prohibition.

It is upgrading domestic infrastructure.

Central bank digital currencies (CBDCs), instant payment systems, open banking frameworks — these are performance responses, not defensive reactions.

States that compete on efficiency retain legitimacy. States that rely solely on restrictions lose them.

Also Read: The first Meta-nation won’t be a country — and it might be built in Southeast Asia

Fourth: Engage platforms as strategic actors

As platforms scale, governments should shift from viewing them as adversaries to recognising them as stakeholders.

This does not mean surrendering authority. It means acknowledging mutual dependency.

Fintech platforms can:

  • Expand financial inclusion
  • Reduce remittance friction
  • Enhance capital access for SMEs
  • Improve transparency in economic flows

AI infrastructure platforms can:

  • Improve supply chain resilience
  • Enhance economic forecasting
  • Optimise public resource allocation

Rather than defaulting to hostility, governments should create structured engagement channels:

  • Regulatory sandboxes
  • Joint policy forums
  • Public-private coordination frameworks
  • Crisis response integration

The goal is not capture. It is alignment.

Fifth: Preserve monetary sovereignty strategically

The greatest vulnerability meta-national platforms create is monetary.

If large segments of a population transact primarily in stable digital assets outside domestic banking systems, central banks lose:

  • Policy transmission tools
  • Visibility into capital flows
  • Control over liquidity conditions

Governments should respond in three ways:

  • Develop a credible digital currency infrastructure
  • Modernise domestic payment rails
  • Ensure interoperability with global systems

Total exclusion is unrealistic.

Interoperability preserves influence.

If domestic systems can plug into global digital infrastructure, states remain relevant in layered sovereignty rather than being sidelined by it.

Sixth: Protect identity without over-centralising it

Digital identity is the next frontier of sovereignty.

If platforms control identity verification and reputation scoring, they influence access to credit, employment, and participation.

Governments should:

  • Develop strong, portable digital identity frameworks
  • Enable API-based integration with private platforms
  • Ensure privacy standards are competitive globally

Over-centralised identity systems risk fragility.

Underdeveloped identity systems risk irrelevance.

The balance is delicate — but critical.

Seventh: Prepare for layered sovereignty

The 20th-century model assumed sovereignty was exclusive. You belonged to one nation-state. Period. The 21st-century model is layered.

An individual may simultaneously belong to:

  • A territorial state (passport)
  • A digital monetary network
  • An AI-driven labour marketplace
  • A cross-border capital ecosystem

Governments should not attempt to eliminate these layers.

They should design policies assuming coexistence.

Layered sovereignty does not automatically erode state authority.

It reshapes it.

States that adapt will remain central nodes. States that resist entirely may find themselves bypassed.

Also Read: You’re designing the wrong thing: Why SEA founders should focus on decision environments, not culture decks

Eighth: Avoid turning platforms into geopolitical weapons

In an era of US–China rivalry, digital infrastructure is increasingly politicised. Export controls. Sanctions. Data restrictions. Capital scrutiny.

But weaponising infrastructure has consequences. If digital platforms are perceived as extensions of geopolitical blocs, adoption narrows. Neutral platforms become more attractive.

This dynamic is especially important for Southeast Asia.

The region thrives on strategic balance. Governments here should resist binary alignment pressures that turn infrastructure into ideological tools. Neutrality enhances economic leverage.

Ninth: Build domestic champions — but don’t cage them

Many governments aim to build national champions in fintech and AI. That’s sensible.

But overprotection can backfire.

If domestic startups are shielded from global competition through restrictive barriers, they may fail to scale beyond home markets.

Meta-national platforms require:

  • Cross-border functionality
  • Regulatory sophistication
  • Global trust

Governments should:

  • Support outward expansion
  • Encourage global compliance capabilities
  • Invest in regional interoperability frameworks

Champion-building should focus on capability, not containment.

Tenth: Redefine sovereignty for the digital era

The deepest shift required is conceptual.

Sovereignty is no longer defined solely by territory.

It increasingly depends on:

  • Control over infrastructure layers
  • Influence over protocol standards
  • Participation in global coordination networks

Governments that cling to a purely territorial model will struggle.

Those that embrace infrastructure diplomacy — shaping standards, fostering interoperability, and partnering with platforms — will remain central.

The meta-national future does not eliminate states. It challenges them to evolve.

The strategic choice ahead

Meta-national platforms will not announce themselves.

They will scale quietly:

  • Through adoption in emerging markets.
  • Through integration with global SMEs.
  • Through developer ecosystems.
  • Through performance advantages.

By the time governments recognise them as sovereignty-adjacent actors, they may already be embedded in economic life.

The choice for governments is not whether to allow them. They are already emerging. The choice is whether to:

  • Fight them blindly,
  • Partner strategically,
  • Or upgrade state capacity to compete.

The most resilient governments will do all three — selectively.

Because the next decade will not be defined solely by great-power rivalry between states.

It will be defined by the rise of infrastructure actors that operate across them.

The most powerful economic system in your jurisdiction may not belong to your central bank.

It may run on code.

And how governments respond will determine whether sovereignty fractures — or adapts.


The post What should states do about Meta-national platforms appeared first on e27.

While stocks rally, gold hits US$4,780 and crypto correlation tells a hidden story

The crypto market’s modest 0.57 per cent gain, bringing total capitalisation to US$2.35T over the last 24 hours, tells a story far more nuanced than the headline suggests. The strength of the Ethereum ecosystem drove this movement, with the network outperforming the broader market by a significant margin. This divergence matters because it reveals where smart capital currently seeks refuge and growth. The 46 per cent correlation between crypto and gold further underscores a market positioning itself for inflationary pressures, even as traditional risk assets rally on geopolitical hopes. I see this not as contradictory behaviour but as a sophisticated reallocation in which digital assets serve dual roles: as vehicles for speculative growth and as emerging stores of value.

Ethereum’s outperformance stems primarily from an unexpected source: a major security incident on Solana. The Drift Protocol exploit, where an attacker extracted substantial value, triggered a fascinating capital rotation. The exploiter has been swapping over US$270M in stolen Solana-based assets into ETH, creating tangible on-chain buying pressure. This dynamic illustrates Ethereum’s evolving role as the preferred settlement layer during periods of uncertainty across competing chains. Rather than fleeing crypto entirely, capital seeks the network with the deepest liquidity, most robust developer activity, and strongest institutional recognition. I interpret this as validation of Ethereum’s long-term thesis: security and decentralisation compound value over time, especially when alternatives face stress. The market rewards resilience, and Ethereum’s ability to absorb this inflow without significant slippage demonstrates the maturity of its infrastructure.

Beyond the hack-driven flows, broader sentiment around Ethereum is supported by credible institutional developments and clarity on the protocol roadmap. Franklin Templeton’s move to launch an institutional crypto division signals traditional finance deepening its commitment to digital asset infrastructure. This is not speculative noise but strategic positioning by a firm managing hundreds of billions. Simultaneously, Ethereum’s 2026 protocol upgrades, including Glamsterdam and Hegotá, provide a tangible catalyst for long-term holders. These upgrades promise meaningful improvements to scalability and user experience, addressing the very concerns that limit broader adoption. Meanwhile, speculative capital rotates into low-market-cap tokens like StakeStone and TrustSwap, which posted triple-digit gains. This risk-taking behaviour indicates healthy market appetite, though I caution that such moves often precede consolidation. The combination of institutional validation and retail speculation creates a supportive, if uneven, foundation for prices.

Also Read: The keys to your kingdom: Navigating crypto custody in 2026

From a technical perspective, Ethereum’s near-term trajectory hinges on its ability to reclaim the US$2,400-US$2,600 resistance zone. A confirmed close above the 50-day exponential moving average would signal strengthening momentum, potentially opening a path toward US$3,000. Immediate support rests near US$2,200, a level bulls must defend to maintain the current structure. I watch these levels closely because they reflect not just chart patterns but the collective psychology of market participants. The situation remains fluid pending further details on the Drift Protocol exploit. Any new information could alter the flow dynamics currently supporting ETH. Protocol upgrades also warrant attention: successful testnet deployments and clear timelines would reinforce confidence, while delays might trigger profit-taking. Technical analysis in crypto never operates in isolation; it intersects with on-chain data, macro sentiment, and narrative shifts.
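For readers unfamiliar with the indicator, the exponential moving average referenced above weights recent closes more heavily, using smoothing factor alpha = 2 / (n + 1). A minimal sketch (invented prices and a shortened window, purely for illustration):

```python
def ema(prices, n):
    """Exponential moving average with alpha = 2 / (n + 1),
    seeded with the first price for simplicity."""
    alpha = 2 / (n + 1)
    value = prices[0]
    for p in prices[1:]:
        value = alpha * p + (1 - alpha) * value
    return value

closes = [2200, 2250, 2300, 2280, 2350, 2400, 2450]  # illustrative only
# "Confirmed close above the EMA" reduces to this comparison:
signal = closes[-1] > ema(closes, n=5)
print(signal)  # True for these illustrative closes
```

A 50-day version simply uses n=50 over at least 50 daily closes; the comparison logic is identical.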

This crypto market movement unfolds against the backdrop of a rallying global risk-asset market. On 2 April 2026, major indices posted gains as de-escalating tensions in the Middle East reduced the geopolitical risk premium. The S&P 500 closed at 6,575.32, up 0.72 per cent, while the Nasdaq Composite gained 1.16 per cent to 21,840.95, led by technology stocks. The Dow Jones Industrial Average rose 0.48 per cent to 46,565.74. Crude oil prices pulled back, with Brent futures falling 1.15 per cent to US$100.00 per barrel and WTI slipping to US$98.71 per barrel, as investors anticipated reduced risk of supply disruptions. Treasury yields edged higher, with the 10-year note yielding 4.33 per cent, reflecting capital rotation from safe-haven bonds into equities. Asian markets surged, notably South Korea’s KOSPI, which jumped 8.4 per cent. This global risk-on sentiment typically supports crypto, and Bitcoin traded relatively steady near US$68,103, suggesting digital assets currently follow idiosyncratic drivers more than broad equity beta.

Gold’s strength amid this risk-on environment deserves particular attention. Spot gold rose to approximately US$4,780.40 per ounce despite de-escalation headlines, indicating persistent demand for inflation hedges. The 46 per cent correlation between crypto and gold suggests a segment of the market treats digital assets as complementary to precious metals in portfolio construction. I find this convergence logical: both assets offer alternatives to fiat currency systems, though through different mechanisms. Gold provides physical scarcity and historical precedent; crypto offers programmable scarcity and network utility. When investors allocate to both, they express a nuanced view: scepticism about long-term fiat stability coupled with confidence in technological innovation. This dual positioning explains why crypto can rise alongside traditional risk assets while maintaining a hedge-like correlation with gold.
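A correlation figure like the one cited here is, in general terms, a Pearson correlation computed over paired returns. A minimal sketch with made-up price series (not real market data, and not necessarily the window or method behind the quoted 46 per cent) shows the computation:

```python
from math import sqrt

def pct_returns(prices):
    """Simple day-over-day percentage returns."""
    return [(b - a) / a for a, b in zip(prices, prices[1:])]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

# Illustrative price series only.
crypto = [100, 102, 101, 105, 107, 106, 110]
gold   = [1000, 1010, 1005, 1020, 1030, 1028, 1040]
r = pearson(pct_returns(crypto), pct_returns(gold))
print(round(r, 2))
```

Correlating returns rather than raw prices matters: two trending price series can show spuriously high correlation even when their day-to-day moves are unrelated.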

Also Read: Breaking: US Labour Department opens door to crypto in 401(k) plans, market jumps 1.86%

The current market structure rewards selective participation. Broad index exposure may underperform focused positions in ecosystems demonstrating clear catalysts and resilient infrastructure. Ethereum’s dual role as a technological platform and a liquidity sink during cross-chain stress events positions it uniquely. I caution against overextrapolating short-term flows: the US$270M in exploited assets represents a transient catalyst, not a fundamental revaluation. Lasting gains require sustained developer activity, user adoption, and regulatory clarity. The convergence of institutional interest, protocol innovation, and macro hedging demand creates a compelling setup, but execution risk remains. I advocate for disciplined position sizing and continuous monitoring of on-chain metrics alongside traditional technical levels.

In this complex environment, my perspective emphasises independent analysis over narrative conformity. The market’s modest gain masks significant underlying dynamics: capital rotation among chains, shifts in institutional strategy, and macro hedging behaviour. These forces interact in ways that simple headlines cannot capture. I believe the next phase of crypto market development will reward those who understand network fundamentals, liquidity dynamics, and macro correlations simultaneously. 

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.

Join us on WhatsApp, Instagram, Facebook, X, and LinkedIn to stay connected.

The post While stocks rally, gold hits US$4,780 and crypto correlation tells a hidden story appeared first on e27.


Circulate Capital’s US$220M fund targets Asia’s recycling gap

Singapore-based Circulate Capital has raised US$220 million in the first close of its second Asia-focused fund, a vote of confidence for circular economy investing at a time when much of the market’s attention and money has been swallowed by artificial intelligence (AI).

The new vehicle, Circulate Capital Asia II, has already reached more than 70 per cent of its US$300 million target, surpassing the firm’s first fund, which closed at US$188 million. The capital will be deployed into recycling and circular supply chain businesses across India, Indonesia, Thailand, Vietnam, the Philippines, and Malaysia, with a focus on plastics and packaging, as well as electronics and apparel.

Also Read: Circulate Capital makes final close of US$76M fund to advance circular economy for plastics

The investor line-up comprises strategic corporates, development finance institutions, pension-linked capital, and family offices. Returning backers include The Coca-Cola Company, Danone, Dow, Procter & Gamble, British International Investment, Proparco, IFC and Builders Vision, while new investors include EMCAF, Impact Fund Denmark, SIFEM and Australian Development Investments.

Why circular economy investing is growing

The broader investment case for circularity is getting harder to dismiss. Globally, companies are facing volatile raw material costs, supply chain disruptions, tighter environmental regulations, and rising pressure from customers and consumer brands to reduce waste. At the same time, the world still extracts more than 100 billion tonnes of raw materials each year while remaining only 7.2 per cent circular.

Asia is central to that story. The region combines rapid consumption growth with weak waste management systems and a manufacturing base that increasingly needs reliable, locally sourced recycled materials.

For investors, this creates a more direct commercial opportunity than the old sustainability pitch. Recycling, recovery, and reuse are no longer just about impact reports; they are about securing feedstock, reducing import dependence, and building more resilient supply chains.

Plastics remain the biggest entry point, but the market is widening. Investors are looking not only at mature recycling streams, such as PET, but also at harder-to-process materials, including polyolefins, flexible packaging, textiles, batteries and electronic waste.

Is AI crowding out circular economy funding?

In short, yes, but not completely.

AI is dominating global venture and growth funding, which makes life harder for circular-economy startups and infrastructure plays, especially in Asia, where many investors still prefer software-led models with faster scaling potential. Circularity businesses are usually more capital-intensive, more operationally messy and slower to mature. That is not exactly catnip for momentum-driven investors.

But the sector is not competing for the same pools of money. Much of Circulate Capital’s backing comes from corporates, development finance institutions, and impact-oriented investors with longer time horizons and strategic reasons to be in the market. For them, circular economy investing is less about chasing the next valuation spike and more about addressing supply chain risks, regulatory exposure, and material scarcity.

That distinction may help the sector keep growing even while AI hoovers up headlines.

Asia’s plastic problem is still severe

If anything, the region’s waste crisis remains underfinanced relative to its scale.

South and Southeast Asia generate vast volumes of plastic waste, while collection, sorting, and recycling systems often lag far behind demand. Low-value and flexible plastics remain especially difficult to recover at scale, and leakage into waterways and coastlines continues to be one of the region’s defining environmental failures.

Also Read: Circulate Capital joins bio-based plastic developer Algenesis’s US$5M seed round

Investors are paying more attention than they were a few years ago, but not enough to match the problem. Circulate Capital estimates that plastics alone represent a US$100 billion cumulative investment opportunity in collection and recycling infrastructure by 2030. That figure underlines the gap between what is needed and what has actually been deployed.

Where the market is heading

The next phase of circular economy investing in Asia is likely to move beyond straightforward bottle recycling into more complex areas: flexible plastics, alternative packaging, textile recovery, battery recycling, and the recovery of rare and critical materials from electronics.

That shift is important because the easiest opportunities have already been identified. The future will depend on whether investors can back businesses that not only process waste but also build dependable circular supply chains around it. The winners are likely to be firms that can supply recycled inputs to major manufacturers and consumer brands on an industrial scale.

Rob Kaplan, founder and CEO of Circulate Capital, said the firm’s track record shows the circular economy is “a sophisticated asset class that can deliver liquidity to private equity investors”.

Circulate Capital’s record so far

Circulate Capital said it has completed more circular economy deals in Asia than any other manager. However, it did not disclose the exact number of deals it has made since launch.

It did, however, point to exits as proof that the model can produce returns. Fund I has fully exited Indian digital waste management platform Recykal, and partially exited Lucro, a recycler focused on hard-to-manage flexible plastics, and Srichakra Polyplast, described as India’s first food-grade plastic recycler.

Since 2020, the firm says its Asia portfolio has added nearly 900,000 tonnes of annual collection and recycling capacity. Fund II aims to finance nearly two million tonnes more.

The bigger takeaway is that circular economy investing in Asia is no longer a fringe climate theme. It is slowly becoming an industrial and supply chain play. AI may still be the market’s favourite shiny object, but waste, unlike hype cycles, has a habit of sticking around.

The post Circulate Capital’s US$220M fund targets Asia’s recycling gap appeared first on e27.


The hidden dangers of AI bias: Where it can go wrong

A 2025 study found that AI-generated summaries influenced users to make purchase decisions 84 per cent of the time, even though the summaries contained hallucinated or altered facts in up to 60 per cent of cases.

This is not just a technical flaw. It’s a product liability risk.

If your AI changes the sentiment of reviews or invents product features that nudge users toward purchases, then you are no longer just building an AI tool; you are shaping consumer behaviour in ways that may be misleading or even legally questionable.

AI bias here is not just unfair; it is conversion distortion.

Every dataset used to train AI systems is essentially a snapshot of the real world. But it’s a snapshot that comes with all the imperfections, prejudices, and historical inequalities of that world.

Let’s explore a few real-world cases where it has gone wrong.

AI’s bias in selecting resumes for hiring

For instance, imagine you train an AI to recognise job applicants based on resumes.

If the data that the AI is trained on predominantly includes resumes from a certain demographic, the system may learn to favour that demographic only by reproducing and even amplifying existing biases.

This was precisely the problem with an AI hiring tool used by Amazon in the past to screen resumes.

Amazon’s AI hiring tool was trained on resumes that were submitted to the company over several years, which unfortunately had an overwhelming bias toward male candidates.

As a result, Amazon’s AI learned to favour male-associated words and traits, like “aggressive” or “competitive,” and ended up filtering out resumes from women.

The AI had simply learned the pattern of who was hired, not the traits that would have led to success for any candidate, regardless of gender. The algorithm did not have the nuance to recognise gender inequality and instead perpetuated it.

This example demonstrates that AI isn’t immune to the biases inherent in human decision-making. In fact, because it operates based on historical data, it often amplifies those biases.

Whether it’s racial, gender-based, or socio-economic bias, AI can end up supporting societal inequalities if not carefully controlled.

Also Read: The rise of invisible businesses: Why the most powerful companies may be built by one person and AI

Risk of over-optimisation

Another major problem in AI pattern recognition is over-optimisation.

This happens when an algorithm is trained too thoroughly on a specific dataset and ends up “memorising” the data rather than learning the underlying pattern.

As a result, the AI performs well on the data it was trained on but poorly when exposed to new, unseen data. This lack of generalisation can be particularly dangerous when AI is deployed in the real world, where data is constantly changing.

Take the example of an AI model trained to predict stock market movements. If it is trained on historical stock data that covers a period of rapid economic growth, the AI might learn to associate certain market behaviours with positive economic conditions.

However, if the economy shifts and a recession begins, the AI might not recognise the new patterns and could make disastrously inaccurate predictions. This is an issue of over-optimisation. The AI has learned patterns specific to one period in time, but cannot extrapolate useful information for a new scenario.
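The memorisation failure described above can be reproduced in a few lines: fit both a simple and a highly flexible model to the same noisy sample, then score each on fresh data drawn from the same process. This is a minimal sketch with synthetic data; the polynomial degrees and noise level are arbitrary choices for illustration, not anyone's actual trading model:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)

def make_data(n):
    """A simple linear relationship observed with noise."""
    x = np.linspace(0.0, 1.0, n)
    return x, 2.0 * x + rng.normal(0.0, 0.1, n)

x_train, y_train = make_data(12)   # the sample the model can "memorise"
x_test, y_test = make_data(12)     # fresh observations of the same process

def fit_and_score(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    p = Polynomial.fit(x_train, y_train, degree)
    mse = lambda x, y: float(np.mean((p(x) - y) ** 2))
    return mse(x_train, y_train), mse(x_test, y_test)

train_lo, test_lo = fit_and_score(1)    # simple model: generalises
train_hi, test_hi = fit_and_score(11)   # flexible model: memorises the noise
print(f"degree 1:  train={train_lo:.4f}  test={test_lo:.4f}")
print(f"degree 11: train={train_hi:.4f}  test={test_hi:.4f}")
```

The degree-11 fit passes through every training point, so its training error is near zero, yet it does worse on new data: the gap between the two scores is exactly the over-optimisation the article describes.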

For example, Wealthfront, a robo-advisor that uses AI to manage investment portfolios, had an incident where its algorithm predicted a market correction and advised its clients to sell off stocks in anticipation of a downturn. However, the correction didn’t materialise as expected, and the stocks that were sold off ended up increasing in value.

AI was reacting to market indicators that pointed to a correction, but it failed to account for other factors, such as market sentiment and long-term trends. It was a case of model overfitting, where the algorithm focused too narrowly on historical patterns rather than adapting to evolving market conditions.

AI’s bias in healthcare at IBM

Imagine an AI that has been trained on a specific subset of medical data that doesn’t account for all possible patient conditions.

If that AI is used to make medical diagnoses in the real world, its inability to adapt to new conditions could result in missed diagnoses or, worse, fatal errors.

IBM’s Watson AI for Oncology was designed to help doctors diagnose and treat cancer by analysing medical data. However, it was revealed that the system was providing unsafe and inaccurate treatment recommendations, as it was trained on limited and biased data. In some cases, Watson made recommendations that didn’t align with clinical standards, and it struggled with real-world data complexity.

Lack of contextual learning

While AI systems are excellent at recognising patterns within the scope of the data they are trained on, they lack the ability to understand the context in which these patterns occur.

Humans have the capacity for empathy, ethical reasoning, and a broader understanding of the world, which is something that AI simply cannot replicate yet.

Also Read: The art of AI integration: Growing your business with chatbots and human expertise

AI’s bias in criminal justice

A glaring example of this is AI’s use in criminal justice, particularly in predictive policing. Predictive policing algorithms use historical crime data to forecast where crimes are likely to occur, in an attempt to optimise law enforcement resources.

However, these algorithms are prone to problematic outcomes because they don’t understand the socio-economic or political context behind why crimes are committed in certain areas.

For instance, if an AI system identifies a pattern where certain neighbourhoods have higher crime rates, it might suggest that police patrols be concentrated in those areas. But it may fail to account for systemic issues such as poverty, lack of education, or historical over-policing, which contribute to these higher crime rates in the first place.

Instead of addressing the root causes of crime, the AI ends up reinforcing a cycle of surveillance and criminalisation that disproportionately affects marginalised communities.

For example, the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm was used in the US criminal justice system to predict the likelihood of recidivism (repeat offences) and inform parole decisions. Investigations found that the system was biased against Black defendants, giving them higher risk scores compared to white defendants with similar criminal histories.

In essence, the AI has no moral or ethical compass to guide its decisions. It simply follows the data, leading to outcomes that may perpetuate harm rather than reduce it.

The risk of “invisible” bias

One of the more deceptive aspects of AI’s bias is that it’s not always obvious. Often, AI systems are seen as impartial or objective because they aren’t influenced by human emotions, subjective opinions, or personal experiences.

However, the reality is that human biases are embedded in the design and deployment of these systems in ways that may be invisible to users.

Consider facial recognition software in China. Chinese facial recognition technology has come under fire for disproportionately misidentifying certain ethnic groups. A recent study showed that in regions with minority populations, facial recognition models had higher error rates, leading to false arrests and discrimination.

While these issues might seem specific to the technology or country, they highlight a larger trend: AI systems built without local context or inclusive data can fail spectacularly when deployed at scale.

These biases often remain hidden because, to the untrained eye, the system “seems” to work fine when tested on a homogenous group.

This issue of invisible bias is compounded by the fact that the vast majority of AI models, especially those used in industry and business, operate as “black boxes.”

The decision-making processes of many AI systems are not transparent, meaning the users of these systems may have no idea how or why the AI made a particular decision.

When these decisions have real-world consequences, such as who gets approved for a loan or who gets hired for a job, there’s little accountability or recourse for those affected.

So, how do we tackle AI bias? Let’s look at some of the interesting solutions explored by a few startups.

Also Read: AI and ethics in digital marketing: Building trust in the tech era

Pymetrics

A startup focusing on AI-driven recruitment tools introduced an ethical AI framework by using neuroscience-based games and algorithms that assess candidates’ cognitive and emotional abilities rather than relying on resumes or biased historical data.

They also partnered with the Fairness, Accountability, and Transparency community to ensure their models are regularly audited for fairness, ensuring that their system doesn’t perpetuate bias.

Impact: This approach provides a more equitable hiring process and has led to a more diverse and inclusive workforce for companies using their platform.

Truera

An AI explainability startup developed an AI model monitoring and auditing tool that not only explains model decisions but also helps identify and mitigate bias in machine learning models. The platform uses visualisations and diagnostics to show if certain demographic groups are disadvantaged by a given model.

Impact: By identifying hidden biases in complex AI models, Truera helps companies correct these issues before they impact real-world outcomes, promoting fairness in automated decisions.
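A bias audit of the kind these tools perform often starts with a simple disparity metric, such as the gap in positive-outcome rates between demographic groups. The sketch below is a hypothetical illustration of that idea, not Truera's actual API; the group labels and decisions are made-up audit data:

```python
from collections import defaultdict

def approval_rate_gap(records):
    """Demographic parity gap: the largest difference in positive-outcome
    rate between any two groups. `records` is (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy audit data (hypothetical): group A is approved far more often than B.
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 50 + [("B", False)] * 50
gap, rates = approval_rate_gap(decisions)
print(rates)               # {'A': 0.8, 'B': 0.5}
print(f"gap = {gap:.2f}")  # gap = 0.30
```

A large gap does not prove unfairness on its own, but it tells an auditor exactly where to look before a model's decisions reach real applicants.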

Zest AI

It focuses on making AI-driven lending fairer by using an alternative credit scoring model that analyses a wider variety of factors, including behaviour and transaction history, instead of just traditional credit scores. They also continuously test their models for bias against different groups to ensure equitable access to financial services.

Impact: Zest AI’s methods have led to more accurate credit assessments, increasing loan approvals for underrepresented groups without increasing risk for lenders, thus reducing financial inequality.

H2O.ai

A startup known for its open-source machine learning tools introduced an automated tool that integrates with its platform to detect and mitigate bias. Their solution uses fairness constraints during training to ensure that models do not favour one group over another, regardless of sensitive attributes like race, gender, or age.

Impact: Their tool, “Fairness.ai,” has been adopted by companies looking to build more transparent and accountable models that are less prone to bias, enhancing trust in AI-powered decision-making.

One of the most important things to remember is that while AI has immense potential, it’s not inherently neutral or infallible.

Its power and effectiveness are entirely dependent on the way it is designed, trained, and used.

In a nutshell

As AI continues to evolve, its ability to recognise and predict patterns will only improve.

The key lies in ensuring that the humans who design and deploy these systems are aware of these risks and work to make AI a force for fairness, equity, and progress. In the end, the true power of AI will be in its ability to enhance human capabilities, not replace them.


The post The hidden dangers of AI bias: Where it can go wrong appeared first on e27.


Why 2026 will be the year AI moves from hype to mandatory safety infrastructure

Across Asia, the scale and intensity of industrial development have transformed its skylines, logistics corridors, and manufacturing capacity in less than two decades. Yet one issue still persists: safety systems have not matured at the same pace.

The numbers illustrate this pressing challenge. The Asia-Pacific region accounts for almost 63 per cent of global workplace fatalities. The rate of fatal injuries has reached 12.7 deaths per 100,000 workers, four to five times higher than the rates recorded in Europe. The majority of these incidents occur in the construction and manufacturing sectors, where dynamic environments, heavy equipment, and evolving site conditions create constantly shifting hazards.

As workplace safety issues persist, regulatory bodies throughout Asia have begun to take a firmer approach, with many jurisdictions transitioning from guidance to enforceable requirements.

This is where artificial intelligence (AI) in workplace safety has moved from experimentation to strategic consideration. In this piece, we examine this turning point in safety infrastructure, while also looking at its shortcomings.

Regulation quietly turning safety technology into policy

One of the clearest signals that AI-enabled monitoring is transitioning from innovation to infrastructure is the regulatory change introduced across the region.

For instance, in Singapore, the Ministry of Manpower (MOM) took a decisive step: since June 2024, it has required Video Surveillance Systems (VSS) on construction projects valued at SG$5 million (US$3.89 million) or more where high-risk activities occur, including work at height, lifting operations, excavation zones, and areas with heavy machinery.

The policy formed a part of the broader Workplace Safety and Health Council framework, which aimed at strengthening oversight and accountability on complex job sites. Alongside the VSS requirement, regulators have increased the maximum penalties for serious safety breaches from SG$20,000 (US$15,560) to SG$50,000 (US$38,900), reinforcing leadership accountability for workplace safety outcomes.

Singapore is not alone in this direction. South Korea’s AI Basic Act, implemented in January 2026, introduces governance frameworks for responsible AI deployment, while Vietnam passed Southeast Asia’s first comprehensive AI law in December 2025.

Across the region, policymakers are shifting from voluntary guidelines toward enforceable frameworks that expect organisations to demonstrate greater transparency and oversight in risk management.

Taken together, these developments point to a broader regional shift — safety technology is no longer viewed purely as operational improvement. It is becoming part of compliance architecture.

From AI cameras to building a cognitive infrastructure

Understanding why regulation is moving in this direction requires looking at what the technology itself is now capable of and how fundamentally it has changed since the first generation of site cameras.

For example, the early generation of digital safety tools focused primarily on recording incidents. Cameras integrated with AI modules captured events, logged documented violations, and reported inspections or accidents that occurred.

The modern AI-enabled systems in 2026 represent a fundamentally different model. Instead of documenting what already happened, they are designed to interpret conditions as they develop.

Computer vision algorithms can monitor scaffolding structures, detect missing guardrails, identify workers operating without harnesses, or track unsafe interactions between forklifts and pedestrians. Sensor networks connected to IoT devices can detect abnormal heat patterns, gas leaks, or environmental conditions that precede fire or chemical hazards.
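A rough sketch of how such sensor-side hazard detection can work is to flag any reading that deviates sharply from a rolling baseline. The code below is an illustrative stand-in, assuming a simple z-score rule over a sliding window; it is not a description of any vendor's actual system:

```python
import statistics

def zscore_alerts(readings, window=20, threshold=3.0):
    """Flag indices whose reading deviates sharply from the rolling
    baseline of the previous `window` samples -- a simple stand-in
    for 'abnormal heat pattern' detection."""
    alerts = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = statistics.fmean(base), statistics.pstdev(base)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# Steady temperature trace with one injected spike at index 30.
trace = [25.0 + 0.1 * (i % 5) for i in range(60)]
trace[30] = 48.0
print(zscore_alerts(trace))  # [30]
```

Real deployments layer much more on top (sensor fusion, hysteresis, escalation rules), but the core idea of comparing each signal against its own recent history is the same.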

Large organisations have begun experimenting with this model. Companies such as Intel, Shell, and Komatsu have explored AI-based monitoring and predictive analytics to improve operational safety and asset reliability.

The shift we are witnessing in industrial safety right now is no longer just about experimenting with AI. It is about recognising that modern worksites generate far more risk signals than periodic human supervision can realistically manage. As regulators strengthen oversight and require greater visibility into high-risk activities, technologies capable of continuously interpreting site conditions will inevitably become part of safety infrastructure.

This speaks to something the regulatory data already confirms: the volume and velocity of risk events on modern worksites have outpaced what traditional supervision models were designed to handle.

The limitations of mandatory safety automation

Despite its promise, AI-driven safety infrastructure is not without its challenges. As adoption grows, organisations are confronting several operational questions that remain unresolved.

One of the most frequently cited concerns is alert fatigue. When monitoring systems generate too many notifications—especially false positives—safety teams can become desensitised, potentially overlooking genuine hazards.

Data governance is another critical issue. Vision AI-based monitoring systems generate significant volumes of sensitive information about workers, site operations, and infrastructure. Ensuring that this data is stored securely and used responsibly is essential, particularly in jurisdictions with evolving data protection laws.

Platforms today align with global worker privacy regulations such as the General Data Protection Regulation (GDPR) and enhance their safety modules with features like face blurring, anonymisation, and client ownership of data to address this issue.

These are not reasons to slow adoption — they are design challenges that organisations must build into their implementation strategy from the outset. The question for 2026 is not whether to deploy AI safety infrastructure, but how to deploy it responsibly.

Why 2026 matters in building an AI-based safety infrastructure

Several forces are converging to make 2026 a genuine inflection point for workplace safety across Asia. Regulators are introducing enforceable digital oversight frameworks. Infrastructure projects are growing in scale and complexity. And the barrier to AI adoption is falling as platforms mature and costs normalise.

At the same time, the stakeholder environment has shifted. Investors, insurers, and regulators are demanding greater transparency in operational risk management — and AI-driven monitoring systems are emerging as the clearest way to demonstrate it.

The transition will not eliminate workplace accidents overnight, and technology alone is never sufficient. But the trajectory is now clear. For organisations operating in advanced regulatory environments like Singapore, the coming years will determine not whether to integrate AI into safety infrastructure, but how effectively that integration is executed.


The post Why 2026 will be the year AI moves from hype to mandatory safety infrastructure appeared first on e27.


Good Friday crypto analysis: Is low liquidity and volume setting up a crypto crash to US$2.17T?

The crypto market’s slight 0.96 per cent retreat to a total capitalisation of US$2.3T over the last 24 hours reflects a broader narrative. Digital assets are no longer operating in isolation. They move in lockstep with traditional finance, and the current macro-driven consolidation proves this integration. The 82 per cent correlation with the S&P 500 is not a coincidence. It signals that crypto now functions as a rates-sensitive risk asset, reacting to global monetary shifts rather than internal blockchain catalysts. This reality challenges the early promise of decentralisation as an independent financial layer and presents an opportunity for those who understand how to navigate the convergence of traditional markets and digital innovation.

Japan’s 2-year government bond yield, which climbed to a 31-year high of 1.385 per cent on April 3, 2026, triggered the latest pressure on risk assets. That move strengthened the dollar and sent ripples through equities and correlated instruments like crypto. I have long argued that monetary policy remains the dominant force shaping asset prices, and this episode reinforces that view. When global yields rise, capital rotates toward safety, and speculative assets face headwinds regardless of their technological merit. Crypto’s reaction here confirms its maturation into the global financial system, but it also highlights a vulnerability. The sector still lacks the insulation that true decentralisation could provide if regulatory frameworks embraced innovation rather than constraining it.

Altcoin weakness compounded the broader market dip. Bitcoin dominance holding at 58 per cent suggests capital remains parked in the flagship asset, and smaller tokens faced disproportionate selling. StakeStone’s STO token crashed by over 55 per cent amid large holder movements and an imminent token unlock, illustrating how sector-specific stress can amplify in low-liquidity environments. Spot volume declining 5.51 per cent means every sell order carries more weight, dragging the total market cap lower with less resistance. I have seen this pattern repeat during past consolidation phases. When liquidity dries up, volatility increases, and projects with weak fundamentals or concentrated ownership structures suffer first. This dynamic underscores why I advocate for deeper liquidity pools and more distributed token ownership as essential components of resilient Web3 infrastructure.

Also Read: While stocks rally, gold hits US$4,780 and crypto correlation tells a hidden story

The near-term technical picture offers a clear framework for what comes next. The market currently tests the 78.6 per cent Fibonacci retracement at US$2.33T, with a critical swing low at US$2.27T. A daily close below that level could open a path toward the yearly low of US$2.17T. The Fear and Greed Index, sitting at 28, labelled Fear, suggests participants feel cautious but not panicked. That sentiment aligns with a market awaiting direction rather than reacting to fresh catalysts. The SEC’s CLARITY Act roundtable on April 16 represents the next major inflexion point for regulatory sentiment. I have spent considerable time analysing how policy shapes crypto markets, and this event could provide the clarity that institutional participants need to commit capital with conviction. Until then, sideways movement between US$2.27T and US$2.33T appears the most probable path.
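For readers unfamiliar with the technique, Fibonacci retracement levels are derived mechanically from a swing high and swing low. The sketch below uses made-up swing points, since the analysis does not state which swing anchors its 78.6 per cent level:

```python
def fib_retracements(swing_high, swing_low):
    """Standard Fibonacci retracement levels, measured down from the
    swing high toward the swing low."""
    ratios = (0.236, 0.382, 0.5, 0.618, 0.786)
    span = swing_high - swing_low
    return {r: swing_high - r * span for r in ratios}

# Hypothetical swing points for illustration only (in US$ trillions);
# these are not the anchors used in the article's chart.
levels = fib_retracements(3.0, 2.0)
for r, lvl in sorted(levels.items()):
    print(f"{r:.1%} retracement -> US${lvl:.3f}T")
```

Traders watch these levels because enough participants place orders around them that they can act as self-fulfilling support and resistance; they carry no predictive power beyond that convention.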

Broader market context adds nuance to this crypto-specific view. US equity markets closed on April 3, 2026, for Good Friday, meaning weekly performance reflected Thursday’s close. The S&P 500 ended the week up 3.4 per cent at 6,582.69, the Nasdaq Composite gained 4.4 per cent to finish at 21,879.18, and the Dow Jones Industrial Average rose 3.0 per cent to 46,504.67. Those gains snapped a five-week losing streak, and crypto did not participate in the relief rally. This divergence warrants attention. It suggests that digital assets remain more sensitive to rate expectations than equity momentum, at least in the short term. Asian markets showed strength with Japan’s Nikkei 225 rising 1.28 per cent to 53,135 points and Hang Seng futures trending higher by roughly 0.6 per cent. The 10-year Treasury yield eased slightly to 4.31 per cent, indicating investors continue to weigh recession risks against surging energy costs.

Commodities added another layer of complexity. Brent crude settled near US$109 per barrel while WTI traded around US$111 as of late Thursday, keeping inflation expectations elevated. Gold saw renewed demand, particularly in Singapore, following a sharp earlier drop. Precious metals often serve as a barometer for risk sentiment, and their resurgence hints at underlying anxiety despite equity gains. Political developments further cloud the outlook.

Also Read: The keys to your kingdom: Navigating crypto custody in 2026

The Trump administration’s authorisation of 100 per cent tariffs on certain imported patented medicines introduces new uncertainty into global trade and pharmaceutical supply chains. Geopolitical tensions around Iran and Oman, with reports of a potential protocol to monitor shipping in the Strait of Hormuz, offered a brief hope for de-escalation but left markets monitoring every headline. Corporate news like SpaceX targeting a valuation exceeding US$2T for a potential IPO captures imagination, and such mega-listings also concentrate capital attention away from smaller, innovative projects in both traditional and digital markets.

My perspective on this consolidation phase centres on three convictions.

  • First, crypto’s correlation with traditional markets is a transitional phase, not an endpoint. As decentralised infrastructure matures and regulatory frameworks evolve, digital assets can reclaim their role as independent stores of value and mediums of exchange.
  • Second, liquidity remains the lifeblood of healthy markets. The 5.51 per cent drop in spot volume demonstrates how fragile sentiment becomes when participation wanes. Projects that prioritise deep, resilient liquidity pools will weather volatility better than those reliant on speculative momentum.
  • Third, regulatory clarity cannot come soon enough. The SEC’s April 16 roundtable on the CLARITY Act represents a critical opportunity to establish rules that foster innovation while protecting participants.

Support at US$2.27T must hold to prevent a deeper retracement toward US$2.17T. A break above US$2.33T could signal renewed confidence, especially if accompanied by rising volume and positive regulatory signals. Until then, cautious consolidation appears to be the baseline scenario. I view this period not as a setback but as a necessary phase of digestion. Markets that advance too quickly without solid foundations often correct more severely later. The current pullback allows participants to reassess fundamentals, strengthen infrastructure, and prepare for the next leg of growth. Those who focus on building rather than speculating will emerge stronger when clarity arrives.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.

Join us on WhatsApp, Instagram, Facebook, X, and LinkedIn to stay connected.

The post Good Friday crypto analysis: Is low liquidity and volume setting up a crypto crash to US$2.17T? appeared first on e27.


AI and the art of team building: Lessons for startup leaders

Last week, I sent out a hiring alert. A client I work with needed someone for his startup—creative, active on social handles, good at thinking on their feet, and aware of digital marketing channels. A standard entry-level role for someone eager to learn and grow their career. No biggie!

Except there was one additional requirement: the candidate would have to be keen enough to learn AI tools and use them in daily operations. Again, not a problem, one would think? Gen Z, after all! They would have adapted to AI like our 14-year-old selves did to the internet.

And here is where it all flipped! I received two responses to the posting.

Get a vibe marketer

One suggestion was to just get a “vibe marketer.” If, like me, you’ve only recently heard the term, here’s what it means: vibe marketing is an AI-powered approach that enables one person to accomplish a lot of what ten specialists can do.

I was quite apprehensive. I’ve seen hundreds of tutorials, have learned two or three of those tools, and used them to build posts and the odd video, but handing a startup’s entire executional think-tank to one person who literally knew it all seemed far-fetched.

And the reality? Well, it was far-fetched! I didn’t get a single CV with vibe marketing credentials—actually, far from it.

Which brings me to my second dilemma.

Also Read: Bridging the skills gap: Tailored L&D programs for cultivating top tech talent in Asia

Recruit an AI-first talent

The other response was to get an entry-level candidate and train them on AI. What seemed great on paper turned out to be not so great in execution. Training takes time, energy, and work. If you need someone to hit the ground running, time is a luxury.

Also, taking initiative, being super passionate to learn new things, and doing it while burning the midnight oil—these don’t make it onto the ideal work-life balance checklist. And it’s a tough gig—execute and learn on the side—not for the faint-hearted.

How do we navigate talent and build teams in the age of AI?

AI isn’t just changing how we work. It’s changing what we need to learn, who we hire, and how we build. But in a sea of endless tools and tutorials, the real challenge isn’t adopting AI; it’s anchoring it to what actually matters.

  • Anchor AI training in what matters most

“These are the best of times. These are the worst of times.”

While AI has opened up great opportunities to scale, build, and grow, it has also made it a largely overwhelming environment for talent across ages and experiences. From prompt engineering to agentic AI—there isn’t just one place to go and up-skill. And there is a new tool launched every day.

So, one way to know what to learn is to build an AI strategy that sits closest to your business—and then train your new and current workforce on the tools most relevant to it, or those most likely to create impact in efficiency, operations, and beyond. Beyond that is just frenzy and noise. Sure, websites and apps can now be built without code—but is that what you need to build now?

  • AI tools are plenty, but impact takes patience

Developing these AI skills will take time.

And the ecosystem needs patience from leaders—founders such as us. Not all tools are perfect. The free versions run out of steam quickly, and there is only so much a startup can pay for expensive AI tools, at least until their efficiency is well established.

Also Read: Future-proofing businesses and talent through technology

This reminds me of COVID-19, when we all thought online was the way consumers would live, breathe, and shop—until all that euphoria died down and water sought its own level.

AI will perhaps not play out quite the same way, because the potency of the technology is well established, but the best solutions will rise above the millions of me-toos.

  • Behind every tech stack is a human stack

Lastly, it is important to understand that irrespective of technology and where it takes us, developing and building talent is about human connection and relationships.

And that needs to be at the core of building teams and navigating the new rules of this game. No one will step into expertise without the grind of entry-level jobs. So they may be reshaped, but they are here to stay.

My checklist for this is going to be all about the right attitude. As my mentor advised me, skills can be learned, but attitude is everything.

In the end, hiring in the age of AI isn’t about finding the perfect resume or the most advanced prompt engineer. It’s about spotting curiosity, grit, and a willingness to learn. The tools will keep evolving, but it’s the people who are adaptable, open, and grounded who’ll build the most meaningful things with them.


Image credit: Canva Pro



The heart of innovation: Why human-centric technology requires a cultural foundation

We just secured our first major client for the sustainability tech tool my startup built — validation after years in corporate leadership. Yet instead of triumph, I felt hollow.

My hands shook from exhaustion, not caffeine. This was not my corporate life; this was my reinvention after a 30-year MNC career, launched amid COVID-19 lockdowns.

That moment crystallised my turning point: ambition without humanity is a dead end.

What was at stake

When I left my global corporate role to explore entrepreneurial opportunities in Singapore and ASEAN, I underestimated the visceral shift. Startups demand ruthless prioritisation when you lack institutional resources.

My first venture, co-building an AI/ML lifecycle assessment (LCA) tool, imploded despite brilliant technical minds. We had prioritised “hi-tech, low-touch” over humanity. Egos clashed, motivations misaligned, and trust evaporated.

We mastered technology but failed at the fundamentals:

  • Transactional dynamics replacing shared purpose
  • Broken communication despite “smart” people
  • Task obsession that dissolved team cohesion

The cost? My well-being, relationships, and ultimately, the venture itself.

The mind-shift that changed everything

That failure forced brutal honesty. I realised: “Hard skills are overrated. Soft skills build businesses.”

In my next venture (a green-economy investment platform), we flipped the script:

  • Hired for ikigai, not just IQ: We prioritised collaborators who shared our purpose – not just technical virtuosos. I specifically reinforced empathy in data interpretation.
  • Designed for trust, not transactions: We instituted rituals like weekly vulnerability check-ins and co-created values. We instituted “No-Meeting Wednesdays” for deep work.
  • Measured humanity metrics: Team health (anonymous pulse surveys) became as tracked as KPIs. Burnout prevention wasn’t soft – it was strategic.

Also Read: Building a more human and engaged workforce in the age of AI

The unfinished journey

The corporate safety net is gone, but the freedom is worth it. I have learned to chase impact sustainably – protecting my mornings for family, outsourcing non-core tasks, and saying “no” to hustle theatrics.

Though I have since pivoted from the earlier failed venture, the lessons stick:

  • Tech enables, but people build. GenAI won’t fix broken trust.
  • Alignment > acceleration. A team rowing together beats solo sprinters.
  • Ambition needs humanity as its compass.

Whether you’re reinventing yourself post-corporate life or building a startup: “Don’t let ‘hard skills’ blind you to the soft infrastructure that makes teams thrive.”

Bridge to present: Human-centric tech in the Age of AI

Today, as GenAI and agentic AI dominate headlines, my approach is starkly different from my early “hi-tech, low-touch” misstep. The allure of “sexy tech” has not faded, but my North Star has regained prominence. AI is a tool, not a torchbearer. It must serve human purpose, not eclipse it.

In my current work, this means:

  • Using AI to amplify—not automate—judgment: We deploy tools to handle data-crunching (like market trends or ESG metrics), freeing our team for what truly matters: interpreting insights through empathy, contextual wisdom, and ethical discernment.
  • Guarding against digital drift: We actively resist letting tools dictate pace or priorities. “Speed” isn’t king; clarity of purpose is. Every AI integration starts with: “Does this deepen human connection or dilute it?”
  • Building ethical guardrails: We co-create protocols ensuring AI enhances transparency (e.g., explaining algorithmic biases) and accountability—never replacing hard conversations or trust-building.

The critical shift? We design around human needs first. Tech follows.

Just as I learned that teams thrive on soft infrastructure, I now see: “Human-centric technology isn’t built with code, it’s built with culture.”

We use AI to remove drudgery, not humanity. To spark collaboration, not replace coffees where real trust ignites. And to extend our impact—not outsource our conscience.
