
The not-so-quiet AI surge next door: What it means for Southeast Asian startups

For many startup founders in Southeast Asia, the story of artificial intelligence still starts, and often ends, with the United States. From ChatGPT to Gemini, the headlines, research benchmarks, and most widely adopted APIs have largely come from Silicon Valley.

But that framing leaves out one of the most important shifts happening in the global tech landscape: new players across the region, particularly in East Asia, are rapidly emerging as serious contenders in AI innovation, and their progress holds specific relevance for Southeast Asia’s fast-growing startup ecosystem.

This isn’t a political statement, nor is it a call to pick sides. It’s simply a reflection of how the landscape is changing. If you’re building in SEA, and especially if you’re navigating constraints around infrastructure, talent, or cost, there’s increasing value in understanding — not ignoring — the tools, models, and mindsets coming from adjacent ecosystems.

A tighter race, and why that’s good for the region

According to a Q2 2025 global AI benchmark report, Chinese labs have closed what was once a year-long performance gap with leading US players to just under three months. In the latest intelligence index, DeepSeek R1 scores nearly identically to Google’s Gemini 2.5 Pro. While OpenAI’s o3 remains slightly ahead overall, the difference is marginal and shrinking fast.

What matters more than who is ahead today is how quickly competition is accelerating globally, and how that fuels faster releases, broader access, and ultimately, better tools for everyone — including those building in SEA.

Image from the report State of AI: China by Artificial Analysis

This competitive environment creates a multiplier effect. As labs around the world continue to push boundaries, developers in emerging markets are seeing more choices and fewer barriers. For Southeast Asia, this means more room to experiment, localise, and scale AI-driven products without being locked into a single pricing model or ecosystem.

The value of openness and optionality

One of the most important developments in AI infrastructure recently is the growing availability of open-weight models and lightweight local deployment tools. While some of the most well-known US-based models are hosted on gated APIs, we’re now seeing encouraging signs of flexibility, such as Google’s release of Gemini CLI, an open-source AI agent for local use cases.

At the same time, Chinese players like DeepSeek and Alibaba continue to release open-weight models with full architectures and parameters accessible to developers.

This trend benefits SEA founders in particular. Open-weight models allow startups to fine-tune systems, localise use cases, and deploy in environments where bandwidth, cost, or compliance restrictions make centralised API usage less viable.

Also Read: Circular capital: Inside the closed-loop ecosystem propelling (and distorting) the AI boom

The ability to work with AI on your own terms — without always needing cloud-based inference or costly token usage — is especially powerful in price-sensitive markets or regions with mixed connectivity. It offers startups more control, more experimentation, and more chances to build differentiated local products.
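
To make that concrete, here is a minimal sketch of local inference with an open-weight model using the Hugging Face transformers library. The checkpoint named below is only an illustrative assumption; any open-weight release sized for your hardware can be swapped in.

```python
# Minimal sketch: run an open-weight model locally, no hosted API required.
# The checkpoint is an assumed example; pick any open-weight model that fits your hardware.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-1.5B-Instruct",  # illustrative open-weight checkpoint
    device_map="auto",                   # GPU if available, otherwise CPU
)

prompt = "Draft a polite payment-reminder SMS in Bahasa Indonesia for a small retailer."
result = generator(prompt, max_new_tokens=120, do_sample=False)
print(result[0]["generated_text"])
```

Because the weights sit on your own machine, the same loop works offline, can be fine-tuned on local data, and incurs no per-token cost.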

Building with scarcity in mind

Beyond the tools themselves, the way East Asian labs are developing and scaling their models offers useful lessons for Southeast Asia. DeepSeek’s rise is especially illustrative. Founded in late 2023, the lab’s models quickly jumped from an intelligence index of 20 to 68 by mid-2025 — without requiring major architectural redesigns.

These gains came from post-training updates and a resource-efficient “mixture of experts” (MoE) approach, where only the most relevant parts of the model are activated during inference.
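
For readers who want intuition for what “only the most relevant parts of the model are activated” means, the toy PyTorch sketch below shows the core routing idea behind MoE layers: a small gating network scores all experts, but only the top-k are evaluated per token. This is a simplified illustration, not DeepSeek’s actual architecture.

```python
# Toy mixture-of-experts layer: a gate scores all experts per token,
# but only the top-k experts are actually evaluated (sparse activation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))
        self.gate = nn.Linear(d_model, n_experts)
        self.k = k

    def forward(self, x):                          # x: (tokens, d_model)
        scores = self.gate(x)                      # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):                 # run only the selected experts
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():
                    w = weights[mask][:, slot:slot + 1]
                    out[mask] += w * self.experts[e](x[mask])
        return out

layer = ToyMoE()
tokens = torch.randn(16, 64)
print(layer(tokens).shape)  # torch.Size([16, 64]); only 2 of 8 experts ran per token
```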

Image from the report State of AI: China by Artificial Analysis

For SEA startups, this represents a mindset worth mirroring: optimisation over scale, fast iteration over perfection, and practical results over theoretical elegance. Across Southeast Asia, founders often face constraints in compute, data infrastructure, or engineering resources.

Learning from peers who are building under similar pressures — but at scale — can be incredibly valuable. In the long term, it may also shape how regional AI startups architect their own models and platforms for affordability and adaptability.

A shared focus on real-world applications

What makes many of these developments even more relevant to SEA is their grounding in consumer-scale usage. Across China, AI capabilities are being rapidly embedded into everyday products — whether that’s chat apps, productivity tools, e-commerce experiences, or entertainment platforms.

Many of these models aren’t just optimised in a lab; they’re trained and refined through feedback from millions of active users.

Image from the report State of AI: China by Artificial Analysis

This kind of real-world scale and product integration is especially instructive for SEA startups working on AI-enhanced platforms. Whether you’re building language learning tools in Vietnam, SME automation in Indonesia, or health tech in the Philippines, it’s worth studying how AI capabilities are aligned with specific customer journeys – not just abstract tasks.

These examples can offer SEA founders roadmaps for balancing AI innovation with user trust, simplicity, and utility.

Beyond tools: The exchange of know-how and practice

The value of looking across borders isn’t only about accessing technology — it’s about exchanging knowledge. Many AI labs and founders across East Asia are actively exploring regional collaboration, and Southeast Asia is a natural partner.

The markets are diverse yet digitally engaged, and there’s a growing appetite for AI-powered products that speak to local realities. This opens the door for partnerships, shared research, co-development, and even founder-to-founder mentorship.

Southeast Asia and its neighbours don’t need to be in competition. In fact, one of the region’s greatest strengths lies in its openness to new ideas. SEA founders have long drawn from global influences — whether it’s Silicon Valley’s product frameworks or China’s growth-at-scale tactics. Today’s AI landscape offers yet another chance to blend global thinking with regional nuance.

Also Read: Anthropic data shows businesses use AI to automate, not collaborate

Strategic thinking in a fragmented ecosystem

That said, caution is still warranted. Not every model will be right for every use case. Questions of data privacy, security, and localisation must remain central to any technology decision. But SEA founders are already used to navigating fragmented environments — multiple cloud providers, payment systems, regulatory regimes, and language groups.

AI is simply becoming one more layer of this puzzle. What matters is being intentional about trade-offs, testing broadly, and staying flexible.

The smartest builders in the region will likely be those who treat AI as a multi-sourced, evolving toolkit — one that draws from wherever the best ideas and most practical tools are coming from. Sometimes that will be the US. Increasingly, it will also be from across Asia.

A moment to learn, share, and scale — together

Southeast Asia is still early in its AI journey, but it’s catching up quickly. Digital infrastructure is improving, local talent is on the rise, and startups across the region are already applying AI in creative, high-impact ways. As the global AI race accelerates, founders here should feel confident not just following, but actively participating.

At the end of the day, all founders — regardless of geography — are solving the same set of challenges: building something useful, scaling it sustainably, and staying ahead of change. We may be working in different markets, but the core lessons of speed, efficiency, and adaptability are universal.

Whether you’re in Singapore, Hanoi, Ho Chi Minh City, Beijing, Shanghai, Shenzhen, Bangkok, Jakarta, Kuala Lumpur, Manila, Mountain View or San Francisco, the future of AI will belong to those who are willing to learn from one another.

This article is inspired by the report State of AI: China by Artificial Analysis.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.


SEA funding wiped out: Back to 2016 levels after historic slump

The Southeast Asian technology investment landscape experienced a dramatic and prolonged contraction throughout 2023 and 2024, according to a new Cento Ventures report, confirming fears that the region was facing the deepest and longest slump of any major emerging market.

Investment activity has not just cooled to pre-COVID levels but has entirely reset the ecosystem to 2016 benchmarks, effectively setting the region back eight years.

The data from the 2023-2024 period paints a stark picture of the reality of the post-frenzy market. By mid-2023, it became unequivocally clear that Southeast Asia had slumped all the way back to the pre-“unicorn era” investment climate. This extended correction has shattered the notion that the 2017-2020 period represented a ‘new normal,’ revealing it instead to have been a singular funding frenzy.

Also Read: Fintech rebound: Singapore bags US$1.04B, outpaces global peers

Investment inflows began slowing in early 2023. A temporary surge in early-stage deals accompanied this initial slowdown, producing the second-highest number of deals ever recorded in the first half of 2023, but a full-fledged ecosystem reset arrived by mid-2023. Following this inflexion point, the number of deals nosedived, and investment volumes stabilised at approximately 2016 levels for the remainder of 2023 and throughout 2024. In total, US$8.1 billion was invested across 1,186 deals in 2023–2024.

Comparing the activity year-on-year highlights the severity of the drop: deals fell by 54 per cent in 2023 compared to 2022, and capital invested declined by 11 per cent. The slump deepened significantly in 2024, with capital invested projected to fall by 55 per cent and deals expected to drop by 65 per cent compared to 2022 figures.

Emerging markets leave Southeast Asia behind

The correction is particularly striking when juxtaposed against the performance of peer emerging markets. Southeast Asia was the first emerging market to dip below its 2017-2020 baseline in the second half of 2022, and it has remained stuck below that level for more than two and a half years.

Meanwhile, rival regions, India and Latin America (LatAm), have significantly outpaced the region’s recovery. By the second half of 2024, all emerging regions excluding Southeast Asia had rebounded considerably, consistently staying above their 2017-2020 baselines. India and LatAm, in particular, exceeded these baselines by a factor of two. In a massive shift of capital flows, by the second half of 2024 India was pulling in four times as much capital as Southeast Asia and LatAm twice as much.

It is worth noting that the pre-slump baseline for Southeast Asia was exceptionally high; the region took more capital in an average half-year period between 2017 and 2020 than either India or Latin America. However, the region is now lagging significantly behind both rivals.

Mega-deals recede to historic lows

The severe decline in large financing rounds was a key driver of the overall capital contraction. The mega-deal category (deals of US$100 million or more) hit historic lows in 2023–2024. The total value of mega-deals during this two-year period amounted to US$2.1 billion, less than half of the peak value of US$5.3 billion reached in the second half of 2021 alone.

Mega-deals did not vanish entirely but became highly concentrated in “flight to quality” bets. These deals were often anchored by major local conglomerates or significant digital groups, such as LINE MAN, GXS, ANext, Bolttech, Mynt, and Ascend. Additionally, internal capital injections from tech giants like Sea Ltd, ByteDance, and Alibaba sustained several operations.

Strategic investors also remained active; North Asian financial institutions such as Mizuho and MUFG continued to provide late-stage capital, often concentrating their funding in digital banking ventures.

Also Read: Southeast Asia startup capital falls 21 per cent, lowest in over six years

A few exceptions stood out: eFishery, which was later shut down following allegations of embezzlement, alone absorbed 10 per cent of the total mega-deal funding in 2023-2024 and, notably, accounted for the entirety of late-stage digital investment from Middle Eastern funds in the region. Beyond Kredivo, however, most of the mega-deal rounds went to affiliates of larger corporate groups.

Despite these scattered signs of life, mainly reappearing in the Digital Financial Services (DFS) sector, the regional rebound is trailing badly behind its peers. Even in the most optimistic scenario, Southeast Asia is anticipated to be at least a year behind the recovery curves witnessed in India and LatAm.


AI without the price tag: How fine-tuned LLMs + RAG give you more for less

Artificial Intelligence (AI) has become the cornerstone of digital transformation, enabling businesses to automate tasks, enhance decision-making, and drive innovation. At the heart of this revolution lie Large Language Models (LLMs): powerful AI systems capable of understanding and generating human-like text.

While LLMs offer tremendous potential, organisations face a critical decision: should they train an LLM from scratch, or fine-tune an existing model and integrate Retrieval-Augmented Generation (RAG) for enhanced performance?

Training an LLM from scratch demands extensive resources, including massive datasets, powerful computing infrastructure, and deep AI expertise. On the other hand, fine-tuned LLMs combined with RAG offer a more cost-effective, scalable, and efficient alternative that allows businesses to harness AI’s power without the heavy upfront investment.

As organisations navigate the complexities of leveraging artificial intelligence, they face a critical dilemma: What is the most effective approach to unlock its full potential and drive maximum impact — build from scratch or optimise?

The rise of domain-specific AI: Why one size doesn’t fit all

AI has evolved significantly from being a broad, general-purpose tool to an increasingly specialised and industry-focused solution. While massive, general-purpose models like GPT-4, Gemini, and Llama are versatile, they often lack the domain-specific expertise that businesses need.

For example, an AI model trained for financial fraud detection requires specialised knowledge of banking regulations, risk assessment, and transaction patterns—something a generic LLM may struggle to provide.

This has led to the growing adoption of fine-tuned AI models, where organisations take a pre-trained LLM and customise it with industry-specific data and knowledge. By fine-tuning a model, businesses can significantly improve accuracy, relevance, and efficiency while ensuring AI aligns with their unique requirements.

Additionally, Retrieval-Augmented Generation (RAG) is emerging as a game-changer, allowing LLMs to fetch real-time data from external sources, ensuring the model’s responses are not limited to static training data.

The true cost of training an LLM from scratch

For companies considering training an LLM from the ground up, the costs and challenges are staggering. Training a state-of-the-art language model requires massive computational power, vast datasets, and a dedicated team of AI researchers and engineers. Estimates suggest that models like GPT-4 cost over US$100 million to develop, requiring thousands of high-performance GPUs and TPUs running for months.

Also Read: Re-skilling in the age of AI and navigating the future of work in Malaysia

Beyond the financial burden, training a model from scratch also demands access to high-quality, diverse datasets. Without extensive, well-curated data, organisations risk producing biased or inaccurate AI models. Even after training, maintaining and updating the model is an ongoing challenge—requiring periodic retraining to keep the AI relevant in fast-changing industries like finance, law, and healthcare.

For most enterprises, the reality is clear: training an LLM from scratch is impractical, costly, and unnecessary. Instead, a more efficient approach is to leverage pre-existing models and customise them through fine-tuning and RAG.

Fine-tuned LLMs + RAG: The smarter, scalable alternative

Rather than reinventing the wheel, organisations can take advantage of pre-trained LLMs and fine-tune them with industry-specific data to align with their business needs. Fine-tuning significantly reduces the time, cost, and computational power required compared to training from scratch, while still delivering a model that is highly accurate and domain-specific.

What makes fine-tuning even more powerful is its synergy with Retrieval-Augmented Generation (RAG). Traditional LLMs rely solely on their pre-trained knowledge, which becomes outdated over time. However, RAG enables AI to retrieve real-time data from external sources, knowledge bases, APIs, or proprietary datasets—ensuring that responses remain accurate and relevant. This is particularly valuable for industries like financial markets, healthcare, and cybersecurity where up-to-date information is critical.
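
As a rough sketch of what that retrieval step looks like in code, the example below embeds a handful of documents, finds the closest matches for a question, and stitches them into the prompt before generation. The embedding library, model, and documents are illustrative assumptions, not a prescribed stack.

```python
# Minimal RAG loop: embed documents, retrieve the closest ones for a query,
# and prepend them to the prompt so the LLM answers from current data.
# sentence-transformers is one of several embedding options; swap in your own stack.
from sentence_transformers import SentenceTransformer, util

docs = [
    "Refunds are processed within 5 business days for SGD transactions.",
    "Corporate cards have a default monthly limit of S$20,000.",
    "Support is available 9am-6pm SGT on weekdays.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, convert_to_tensor=True)

def retrieve(query, k=2):
    q_vec = embedder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(q_vec, doc_vecs, top_k=k)[0]
    return [docs[h["corpus_id"]] for h in hits]

question = "How long do refunds take?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # pass this prompt to your fine-tuned LLM of choice
```

Swapping the in-memory list for a vector database is what turns this toy into a production retrieval pipeline.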

By combining fine-tuning with RAG, organisations can deploy AI that is cost-effective, faster to implement, continuously updated, and contextually aware—without the burden of retraining a model from scratch.

Challenges in implementing LLMs and RAG

While fine-tuning and RAG present a more feasible AI strategy, organisations must navigate key challenges to ensure success. One of the biggest hurdles is data quality and availability. Fine-tuning an LLM requires high-quality, well-labeled datasets, but many organisations lack structured and curated data sources. Inadequate data can lead to biased, inaccurate, or unreliable AI models.

Integrating RAG also adds complexity, as it requires organisations to establish real-time data retrieval pipelines. This often involves using vector databases, which require specialised expertise to set up and maintain. Additionally, businesses must address security and compliance concerns, ensuring that sensitive proprietary data remains protected and does not get exposed through external retrieval mechanisms.

Despite these challenges, fine-tuned LLMs and RAG remain a superior option, especially when organisations invest in robust data strategies, secure deployment practices, and continuous model monitoring.

The business case: Why fine-tuned LLMs + RAG outperform training from scratch

For most organisations, the decision to fine-tune an LLM and implement RAG is a no-brainer. The advantages extend beyond just cost savings—businesses also gain greater flexibility, faster deployment times, improved accuracy, and enhanced real-time capabilities.

Financially, fine-tuning and RAG drastically cut down AI development costs. Instead of spending millions on compute resources, organisations can fine-tune pre-trained models for a fraction of the cost, often using cloud-based AI services like AWS Bedrock, Azure OpenAI, or Hugging Face’s model hub. Deployment is also exponentially faster, with fine-tuned models ready for production in weeks instead of years.

Moreover, the real-time adaptability of RAG gives businesses a major competitive edge. A financial institution can use RAG-powered AI to pull the latest stock market trends, while a cybersecurity team can retrieve real-time threat intelligence. This ability to update AI knowledge dynamically without retraining the entire model makes fine-tuning with RAG the ideal approach for organisations that operate in fast-changing environments.

Also Read: The human touch endures: Why AI won’t replace all blue-collar jobs

How organisations can get started with fine-tuned LLMs + RAG

To maximise AI’s potential, businesses should start by identifying key areas where AI can deliver the most value—whether it’s automating customer service, enhancing research, or improving decision-making. Next, organisations should select the right pre-trained LLM, such as OpenAI’s GPT models, Meta’s Llama, or open-source alternatives like Mistral or DeepSeek.

Once a model is chosen, the fine-tuning process begins. Businesses should gather high-quality, domain-specific data and use supervised fine-tuning to align the model’s responses with their industry requirements. Following fine-tuning, RAG can be integrated to ensure AI has real-time access to external knowledge sources.
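
A minimal sketch of that supervised fine-tuning step is shown below, using LoRA adapters so only a small set of weights is trained. The base model, dataset file, and hyperparameters are assumptions for illustration; real projects would tune all three.

```python
# Sketch of parameter-efficient supervised fine-tuning with LoRA adapters.
# Model name, dataset file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "Qwen/Qwen2.5-1.5B-Instruct"              # assumed open-weight base model
tok = AutoTokenizer.from_pretrained(base)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Domain data as JSONL records of the form {"text": "..."} (placeholder file name)
data = load_dataset("json", data_files="domain_qa.jsonl")["train"]
data = data.map(lambda r: tok(r["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
model.save_pretrained("ft-out/adapter")          # small adapter weights, not a full model copy
```

Because only the adapter weights are trained and saved, this approach runs on a single commodity GPU and leaves the base model untouched.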

Leveraging an integrated, pre-configured AI technology stack like QXLI can significantly reduce costs, minimise the need for specialised resources, and accelerate time-to-market. By streamlining deployment, optimising infrastructure, and automating key processes, organisations can focus on innovation rather than technical complexities, ensuring faster AI adoption and improved operational efficiency.

For deployment, organisations must also consider security, compliance, and scalability. AI solutions can be hosted on-premises or in the cloud, depending on data privacy needs. Finally, continuous monitoring and optimisation should be implemented to ensure AI models remain accurate, fair, and up-to-date over time.

The clear winner in the AI game

While training an LLM from scratch may seem like the ultimate AI achievement, the reality is that it is costly, time-consuming, and impractical for most organisations. Instead, the smarter and more efficient path lies in fine-tuning pre-trained models and integrating RAG, enabling businesses to build highly specialised, real-time AI solutions with minimal cost and effort.

By adopting fine-tuned LLMs with RAG, organisations can achieve unparalleled AI performance, scalability, and adaptability—without the burden of massive infrastructure investments. In the race for AI dominance, organisations that adopt efficient, agile, and real-time AI strategies will not only stay ahead of the curve but also define the future of their industry.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.


The scaling paradox: Why elite startups abandon their winning formula

Revenue-per-employee just became the only metric that matters—everything else is theatre.

“Scale the signal, not the size. The graveyard is full of startups that confused growth with bloat.”

Here’s what keeps venture partners up at 3 AM: Goldman Sachs needs 49,000 bodies to generate what Apple does with surgical precision per worker. So why do our sharpest startups morph from lean execution machines into bureaucratic nightmares the second Series A money hits the bank?

Check your cap table, then check your payroll. Which one still has ammunition for the next pivot? If that answer takes longer than five seconds, you’ve already loaded the gun pointed at your own runway.

The numbers don’t lie, they just kill

Chart: revenue per employee at leading technology companies. Sources: SEC filings, acquisition reports, PitchBook

Apple cranks out US$2.38 million per employee. Meta delivers US$2.19 million. Nvidia hits US$2.06 million. These aren’t vanity numbers—they’re proof that peak performance demands precision, not padding.

Instagram sold to Facebook for US$1 billion with just 13 employees—that’s US$77 million per person. WhatsApp’s 55-person team served roughly 450 million users when Facebook acquired them for US$19 billion. Meanwhile, your Series-B peers are burning US$2M supporting 212 people to serve maybe 275,000 users.

Death by committee

CB Insights autopsied 966 startup corpses from 2020-2024 and found the same cause of death: premature scaling beat out pricing screw-ups, customer churn, and regulatory disasters. TechCrunch reported 966 shutdowns in 2024 versus 769 in 2023, and the pattern is clear. The casualties aren’t companies that never found product-market fit—they’re the ones that found it, then hired themselves to death.

The math is brutal. A US$2M runway keeps five elite performers fed for 24 months. Same cash with 20 average hires? 6 months, maybe less. Revenue per head crashes from US$400K to US$100K overnight.
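
The arithmetic behind those figures is easy to check for yourself; the annual-revenue assumption of roughly US$2M is implied by the per-head numbers, not stated outright.

```python
# Back-of-envelope runway maths implied by the paragraph above.
runway = 2_000_000                                 # US$2M in the bank
cost_per_head_month = runway / (5 * 24)            # 5 people lasting 24 months
print(round(cost_per_head_month))                  # ~16,667 per person per month

months_with_20_hires = runway / (20 * cost_per_head_month)
print(months_with_20_hires)                        # 6.0 months at the same burn per head

assumed_revenue = 2_000_000                        # assumption: ~US$2M annual revenue
print(assumed_revenue / 5, assumed_revenue / 20)   # 400000.0 vs 100000.0 per employee
```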

Also Read: AI integration field notes for tech startups and scale-ups: Software engineering, product, and beyond

The physics of bureaucracy

Meta-analysis of 3,200 firms shows every new hire cuts total output by 0.9 per cent, with complexity growing exponentially—not linearly. This isn’t poor management—it’s physics.

UCLA tested this with LEGO assembly: two-person teams finished in 36 minutes while four-person teams needed 56 minutes. Panasonic’s own factory data shows productivity falls off a cliff after 50 workers per line.

Forbes surveyed 1,842 founders and found the decision paralysis that kills startups:

92 days is a full product cycle. While 100-person startups debate features, 10-person competitors ship, test, and iterate.

The institutional amnesia problem

Professor Jennifer Mueller at UC San Diego calls it “relational loss”: the sense that individual support erodes as teams balloon. But there’s something deeper happening: institutional amnesia.

Winners forget they’re supposed to disrupt incumbents, not become them. Dropbox scaled from 4 employees to IPO by maintaining extreme focus on core product rather than building departments. Zoom’s Eric Yuan kept the company lean through US$100M ARR by prioritising engineering excellence over organisational complexity.

The consultant hack

McKinsey studied autonomous teams and found something counterintuitive: external expertise crushes internal mediocrity.

Triple the quality, 40 per cent of the cost, 2.6x faster. Smart startups get this. Instead of building marketing departments, they hire world-class agencies for campaigns. Instead of legal teams, they partner with top firms. They buy results, not résumés.

Stripe famously used this approach through Series B, keeping headcount under 50 while processing billions in payments by leveraging best-in-class partners for compliance, fraud prevention, and regional expansion.

The disruption paradox

If IBM’s 300,000 employees, Google’s 190,000, and Microsoft’s 220,000 could out-innovate small teams, venture capital wouldn’t exist. These giants would own every breakthrough, every disruption, every advance. Startups would be irrelevant.

But startups win through speed, precision, and agility. You don’t beat a 1,000-piece orchestra by building a 1,001-piece orchestra. You beat it with three snipers who know exactly where to aim.

Gallup found that 42 per cent of employees at sub-10 companies report engagement versus 30 per cent at larger firms. The Ringelmann Effect proves individual output drops as group size grows—researchers call it “social loafing”.

The vanity metric trap

Startups obsess over headcount as a growth signal, revealing a fundamental misunderstanding of value creation. ScienceDirect research confirms companies optimising revenue per employee significantly outperform headcount optimisers.

PitchBook analysed 1,100 tech exits and found revenue-per-employee optimisers outperform headcount growers by 240 per cent in exit multiples.

Consider the contrast: Snapchat reached 100 million daily users with under 100 employees, while traditional media companies needed thousands to serve smaller audiences. Signal serves 50 million users with fewer than 50 employees, proving that encrypted messaging at scale doesn’t require enterprise headcount.

Also Read: From pilot to scale: Why traditional VC metrics don’t work for climate deep tech

The US$2M choice

Every seed extension forces a decision: each extra hire is equity that can’t go toward product, pricing, or distribution. Reid Hoffman warned Stanford students in 2024: “Hiring wrong at the wrong time is the fastest death sentence in venture. There’s no second product-market fit.”

The survival framework

The next decade’s winners won’t scale fastest—they’ll scale smartest:

  • Hire top one per cent talent, not average performers
  • Track revenue per employee, ignore headcount
  • Buy expertise externally, not loyalty internally
  • Build systems before adding bodies
  • Resist the “we look bigger” vanity play

The mirror question

Tape this to your monitor: “If Goldman needs 49,000 people to match Apple’s margins, why are we copying Goldman?”

When that note falls off, so does your runway.

Every successful startup faces the same choice: keep the lean culture that created initial wins, or join the 90 per cent that scale into oblivion. Small teams already work—WhatsApp, Instagram, and Stripe proved that.

The real question is whether founders have the nerve to ignore conventional wisdom long enough to build something that lasts.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.


From automation to Agentic AI: Trust starts at the data layer

When I first started my career in the early 2000s, I was given what felt like a painfully mundane task: enter postcodes into an internal business website, extract data on telephone exchanges, and type it into a spreadsheet to design metropolitan networks.

Being the bright-eyed graduate I was, I decided there had to be a better way. Over one weekend, I wrote a small Visual Basic app that took postcode inputs from the spreadsheet, screen-scraped results from the website, and parsed the data back. What once took two weeks could now be done in five minutes. That small hack freed me to spend more time with customers, and it helped win millions of dollars in deals. It also earned me a stern lecture from the IT team, who saw my tool as a denial-of-service risk.
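
The modern-day equivalent of that weekend hack is only a few lines of Python. The sketch below is purely illustrative: the URL, the CSS selector, and the spreadsheet layout are placeholders standing in for whatever internal system you are automating.

```python
# Modern sketch of the postcode screen-scraping hack: read postcodes from a
# spreadsheet, fetch a lookup page per postcode, parse the result, write it back.
# URL, selector, and column layout are placeholders for an internal system.
import requests
from bs4 import BeautifulSoup
from openpyxl import load_workbook

wb = load_workbook("postcodes.xlsx")
ws = wb.active

for row in ws.iter_rows(min_row=2):               # column A: postcode, column B: result
    postcode = row[0].value
    resp = requests.get("https://intranet.example/lookup",
                        params={"postcode": postcode}, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    exchange = soup.select_one("#exchange-name")
    row[1].value = exchange.get_text(strip=True) if exchange else "not found"

wb.save("postcodes_enriched.xlsx")
```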

That was my first taste of automation. Eliminating repetitive work did not just save time. It shifted where I could add value.

The rise of no-code automation

Fast forward to today, and we have tools like Zapier and n8n. Instead of writing code, anyone can drag and drop “if this, then that” rules to automate workflows. APIs that once needed developers are now accessible through simple interfaces. Internal processes that used to take hours can be chained together in minutes.

Once you have tasted automation, the next question comes quickly: Can we do more? Can we make these flows less brittle, more adaptive, even self-healing when something changes? Can we move beyond scripts toward systems that understand goals and figure out the steps themselves?

That is where I see agentic AI entering the conversation. It feels like my childhood dream of a Star Trek computer coming to life.

What agentic AI really means

Agentic AI refers to tools that do not just follow rules but act autonomously to achieve objectives. The leap is from executing tasks to making decisions. At its core, agentic AI is:

  • Goal-oriented: An agent decides what actions to take to achieve an outcome. Example: “Reduce overcrowding by 15 percent this year” might lead it to adjust escalator directions, modify turnstile access, or recommend alternative exits.
  • Adaptive and flexible: If one method fails, it explores alternatives instead of stopping with an error code.
  • Reasoning-capable: It weighs trade-offs, infers missing information, and runs simulations on historical data to propose better steps.
  • Autonomous: It runs continuously and can initiate workflows without explicit human triggers. A train delay, for example, could prompt an agent to adjust signage and entry gates to prevent bottlenecks.
  • Multi-tool orchestration: It selects and connects tools across systems, even ones you did not hard-code to work together.

If automation is about rules, agentic AI is about goals.
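
A goal-driven agent is easiest to picture as a loop: observe, compare against the goal, pick an intervention from the tools available, act, and observe again. The sketch below is a deliberately simplified, hypothetical crowd-management agent; the sensor and actuator functions are stand-ins for real integrations.

```python
# Simplified agentic control loop for the station-crowding example.
# get_crowd_density(), set_escalator_direction() and open_extra_gates()
# are hypothetical stand-ins for real sensor and control integrations.
import random
import time

GOAL_MAX_DENSITY = 0.85            # goal: keep density below 85% of platform capacity

def get_crowd_density():
    return random.uniform(0.5, 1.1)              # stub sensor reading

def set_escalator_direction(direction):
    print(f"escalators -> {direction}")

def open_extra_gates(n):
    print(f"opened {n} extra gates")

def agent_step():
    density = get_crowd_density()                # observe
    if density <= GOAL_MAX_DENSITY:              # goal already met: do nothing
        return
    set_escalator_direction("outbound")          # mildest intervention first
    if density >= 1.0:                           # escalate if the situation is severe
        open_extra_gates(2)

for _ in range(3):                               # in production this loop runs continuously
    agent_step()
    time.sleep(1)
```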

Also Read: The digital lag: How traditional consulting is failing to grasp the agentic AI revolution

The data bottleneck

For all the excitement, one truth remains: an agent is only as good as the data it receives.

Take smart cities. Train stations are busy, complex hubs that are prone to overcrowding and often need manual intervention when something goes wrong. CCTV is everywhere, but cameras have blind spots, struggle in low light, and cannot reliably capture depth. The result is incomplete, sometimes misleading data.

At Curium we deployed LiDAR sensors across European stations to provide accurate insights into footfall, crowding, and hazards. An investor once asked me, “So what? Is this just a nice to have?” The question stuck with me. Not everyone sees why LiDAR adds value, or how agentic systems depend on high-fidelity baseline data.

When you add LiDAR to CCTV, the picture changes. You get full coverage, depth perception, and real-time accuracy. That coherent data unlocks predictive analysis: what is normal, what is not, and how conditions are evolving. With agentic AI, you can then test interventions dynamically: shift escalator directions, close entry gates, reroute passengers, and watch outcomes in real time.

What changes in practice

Forward-thinking operations managers are starting to see the opportunity. With richer data, agentic AI moves from buzzword to practical decision support. It is not only about efficiency. It is also about safety and resilience.

Imagine:

  • Highway systems that adjust traffic lights and lane allocations dynamically.
  • Public transport hubs that respond to surges before they spill into dangerous overcrowding.
  • Driverless systems that adapt in real time to disruptions.

These are early signals of how work, infrastructure, and customer experience will be reshaped.

What leaders should pay attention to

The implications go beyond technology. Teams will shift from doing tasks to overseeing autonomous systems. Culture will need to balance trust in agents with accountability when mistakes happen. Regulators will ask the hard question: if an AI agent makes the wrong call, who is responsible?

Practical steps to take now:

  • Start small: Pilot in low-risk domains with clear success criteria.
  • Design for oversight: Keep a human in the loop, define escalation rules, and log decisions for review (a minimal sketch follows this list).
  • Harden the data layer: Invest in sensor quality, data validation, and observability before you attempt autonomy.
  • Focus on customer trust: Efficiency gains mean little if people do not feel safe relying on the system.
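
Here is a minimal sketch of the oversight point above: every decision is written to an audit trail, and anything above a risk threshold is routed to a human instead of being executed. The decision structure and threshold are illustrative assumptions, not a standard.

```python
# Sketch of "design for oversight": log every agent decision, and escalate
# anything above a risk threshold to a human instead of executing it.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
RISK_THRESHOLD = 0.7                               # illustrative cut-off

def execute(action):
    logging.info("executing %s", action)

def escalate_to_human(action, risk):
    logging.warning("escalation: %s (risk %.2f) needs operator approval", action, risk)

def oversee(decision):
    # append-only audit trail for post-incident review
    with open("agent_decisions.log", "a") as f:
        record = {**decision, "ts": datetime.now(timezone.utc).isoformat()}
        f.write(json.dumps(record) + "\n")
    if decision["risk"] >= RISK_THRESHOLD:
        escalate_to_human(decision["action"], decision["risk"])
    else:
        execute(decision["action"])

oversee({"action": "reverse escalator 3", "risk": 0.4})     # executed automatically
oversee({"action": "close station entrance", "risk": 0.9})  # sent to a human
```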

We are already seeing first steps in Singapore with fixed-route autonomous shuttles in Punggol. The lesson is simple: the path to autonomy starts with reliable data and measured rollouts.

Also Read: Agentic AI in action: How Southeast Asia’s startups are turning constraints into strengths

Trust starts from the baseline

I often show a simple pyramid when pitching autonomous systems. At the base is sensor data, above it sits automation and agentic AI for perception and decision making, and at the top is public trust.

If the base is weak, the whole pyramid crumbles. Faulty data leads to flawed decisions, which lead to failures that erode trust. Get the basics right (high-fidelity data acquisition and validation), and the rest of the pyramid can stand.

Public perception ultimately rests on outcomes: did the car crash, was the station overcrowded? By building robust data foundations before handing more authority to AI agents, we stand a better chance of not only achieving success but also earning lasting trust in the systems we build.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.
