Low-altitude economy hubs in the Indian Ocean: Nairobi, Madagascar, and Sri Lanka

The Indian Ocean trade arc is evolving. New logistics models emerge where drones and airships bridge gaps left by limited ground infrastructure. These low-altitude economy (LAE) systems are more than delivery tools—they are potential foundations for trade, finance, and services.

This post and analysis are inspired by the September Nairobi Meeting, where Mr Frank Zhang from China’s AI Universe Association introduced the concept and implementation of the LAE. We explore three candidate hubs—Nairobi, Madagascar, and Sri Lanka—and outline why each matters, how they compare, and what sequence of investment makes sense. This assessment provides a quick overview of the vast potential of LAE in the Indian Ocean region.

Drone logistics in Kenya: Nairobi’s low-altitude economy advantage

Kenya leads in setting up the rules, testbeds, and partnerships needed to scale drone logistics.

  • Unmanned Aerial Systems (UAS) regulations: The Kenya Civil Aviation Authority enforces active rules and updated standards. In 2025, the Konza Technopolis National Drone Corridor launched Africa’s first operational unmanned traffic management (UTM) system for beyond-visual-line-of-sight flights.
  • Operational pilots: Kenya has ongoing drone delivery projects for medical supplies through county pilots, Kenya Flying Labs, and Zipline’s network. These demonstrate demand and set frameworks for expansion.
  • Enabling infrastructure: Electricity coverage is approximately 76 per cent and continues to improve. Nairobi’s digital backbone—the “Silicon Savannah”—supports UTM software, fleet data, and e-commerce integration.
  • Public–private partnerships (PPP): Konza’s corridor serves as a PPP platform that brings together regulators, governments, and private vendors.

Nairobi is the fastest path to operations for investors with constrained budgets. Expansion should focus on county corridors, integrated e-commerce and health supply chains, and regulatory sandboxes for new payload types.

Airship logistics in Madagascar: Unlocking remote access with Flying Whales

Madagascar positions itself differently. Instead of scaling mass-market drones, it focuses on heavy-lift airship cargo.

  • Heavy-lift airships: In 2025, Madagascar signed a strategic partnership with Flying Whales to deploy the LCA60T, a 200-metre airship capable of carrying 60 tonnes. Safran provides powertrain systems, validating technical readiness.
  • Regulatory moves: The Aviation Civile de Madagascar formalised drone guidance in 2025, signalling intent to regulate unmanned operations.
  • Infrastructure constraints: Internet penetration was only about 20 per cent in 2023, and electricity access remains low. Satellite and mesh networks are required to enable command-and-control links.
  • Market fit: Airships suit mining, forestry, and humanitarian corridors where demand density is thin but access is critical.
  • PPP platform: The Flying Whales deal is state-backed and designed to anchor a logistics hub around airship operations.

Madagascar is not aiming for large-scale consumer drone delivery. Its comparative advantage lies in airship corridors that unlock stranded cargo and humanitarian supplies.

Sri Lanka’s Port City Colombo: Building a drone and finance hub

Sri Lanka combines logistics and finance ambitions. Its location and new infrastructure projects provide long-term hub potential.

  • Regulatory framework: The Civil Aviation Authority of Sri Lanka maintains UAS rules, though procedures are stricter than Kenya’s sandbox model.
  • Strategic location: Colombo lies on East–West shipping and air routes, serving South Asia, Africa, and Southeast Asia.
  • Port City Colombo special economic zone (SEZ): A new foreign-currency zone is being built to attract finance, arbitration, and services, creating a foundation for logistics-finance convergence.
  • Digital and human capital: Internet penetration was about 67 per cent in 2023, supporting UAV operations and fintech services.
  • Macro recovery: Stabilisation under the IMF programme has restored investor confidence, enabling lower-cost financing.

Sri Lanka is not the fastest to deploy drones, but it offers the broadest reach and strongest path to a Singapore/Dubai-style finance-logistics hub.

Kenya vs Madagascar vs Sri Lanka: Best Indian Ocean drone logistics hubs

Dimension | Nairobi (Kenya) | Madagascar | Sri Lanka (Port City Colombo)
UAS rules | Kenya Civil Aviation Authority (KCAA) standards updated in 2024 | Aviation Civile de Madagascar (ACM) rules formalised in 2025 | Civil Aviation Authority of Sri Lanka (CAASL) rules in 2025; stricter processes
UTM corridors | Operational BVLOS corridor at Konza | None; airships instead | No national UTM corridor
Demonstrated use cases | Medical delivery, Zipline, e-commerce | Heavy-lift airships for mining and aid | SEZ finance; UAV operations possible under CAASL
Infrastructure | Rising electrification, strong tech base | Low electrification, low internet | Higher internet, strong port and airport
Trade signal (Logistics Performance Index 2023) | Mid-pack globally | Lower tier | Around the median; timeliness improving
PPP momentum | Konza National Drone Corridor PPP | Flying Whales joint venture | Port City Colombo SEZ, port expansion

Pathways to growth

  • Nairobi: Consolidate drone corridors, expand county networks, add insurance and clearing desks anchored to UTM systems.
  • Madagascar: Stand up the LCA60T operator, connect mining and forestry corridors, add UAV last-mile nodes, and integrate finance products.
  • Sri Lanka: Pilot near-port UAV and eVTOL logistics, integrate port and airport data, and channel trade finance into Port City Colombo.

Together, they can form a tri-node system: Nairobi as the operational centre, Madagascar as the heavy-lift spoke, and Colombo as the financial hub.

Risks and mitigation

Regulation and security remain the foremost concerns in building low-altitude economies. Here, the Specific Operations Risk Assessment (SORA) framework provides a standardised method for evaluating risks and assigning mitigation measures.

By applying SORA, policymakers and operators can structure sandbox corridors, classify ground and air risks, and assign appropriate levels of operational assurance. This creates a transparent pathway for authorisation while reducing uncertainty for investors and regulators alike.
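
To make this concrete, here is a minimal sketch, in Python, of how a SORA-style assessment can be structured: classify ground risk, classify air risk, then map the combination to an assurance level. The class boundaries and the SAIL lookup below are illustrative placeholders, not the official JARUS SORA tables, which take many more inputs.

```python
# Illustrative sketch of a SORA-style risk assessment.
# The class boundaries and the SAIL lookup are toy values for
# illustration only -- NOT the official JARUS SORA tables.

from dataclasses import dataclass


@dataclass
class Operation:
    over_populated_area: bool    # ground risk driver
    max_altitude_m: float        # air risk driver
    in_controlled_airspace: bool
    mitigations: int             # number of approved ground-risk mitigations


def ground_risk_class(op: Operation) -> int:
    """Assign a coarse ground risk class (higher = riskier), toy scale 1-4."""
    grc = 4 if op.over_populated_area else 2
    # Each approved mitigation (e.g., parachute, tethering) lowers GRC by 1.
    return max(1, grc - op.mitigations)


def air_risk_class(op: Operation) -> str:
    """Assign a coarse air risk class from 'a' (lowest) to 'c' (highest)."""
    if op.in_controlled_airspace:
        return "c"
    return "b" if op.max_altitude_m > 120 else "a"


def sail(grc: int, arc: str) -> str:
    """Toy lookup: combine ground and air risk into an assurance level."""
    score = grc + {"a": 0, "b": 1, "c": 2}[arc]
    levels = ["SAIL I", "SAIL II", "SAIL III", "SAIL IV", "SAIL V", "SAIL VI"]
    return levels[min(score - 1, 5)]


corridor = Operation(over_populated_area=False, max_altitude_m=100,
                     in_controlled_airspace=False, mitigations=1)
print(sail(ground_risk_class(corridor), air_risk_class(corridor)))  # SAIL I
```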

Connectivity gaps pose another systemic challenge, particularly in under-served geographies such as Madagascar. Power reliability and command-and-control (C2) links are often fragile. To address these, solar-plus-storage systems coupled with satellite communication links can help bridge the infrastructure deficit. These measures ensure continuous operations across critical low-altitude corridors, even in environments where traditional grid or terrestrial networks are weak.
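
As a sketch of what that resilience can look like in software, the fragment below models a hypothetical C2 link selector that prefers a terrestrial link, falls back to satellite, and commands a graceful lost-link procedure when both are down. The link names, health checks, and actions are assumptions for illustration.

```python
# Hypothetical C2 link failover logic for a low-altitude corridor.
# Link names, health checks, and actions are illustrative assumptions.

from typing import Callable, Optional


class Link:
    def __init__(self, name: str, is_healthy: Callable[[], bool]):
        self.name = name
        self.is_healthy = is_healthy


def select_c2_link(primary: Link, backup: Link) -> Optional[Link]:
    """Prefer the terrestrial link; fall back to satellite if it degrades."""
    for link in (primary, backup):
        if link.is_healthy():
            return link
    return None  # both links down


def control_step(primary: Link, backup: Link) -> str:
    link = select_c2_link(primary, backup)
    if link is None:
        # Lost-link contingency: bring the aircraft to a graceful stop.
        return "EXECUTE_LOST_LINK_PROCEDURE: loiter, then land at safe point"
    return f"C2 via {link.name}"


terrestrial = Link("4G mesh", is_healthy=lambda: False)  # e.g. grid outage
satellite = Link("satcom", is_healthy=lambda: True)
print(control_step(terrestrial, satellite))  # -> C2 via satcom
```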

Investment approaches

Investment strategies differ across Kenya, Sri Lanka, and Madagascar, reflecting their stage of ecosystem readiness and the capital appetite of investors. For those with limited capital, Nairobi presents a low-risk entry point. Proven use cases in logistics and agriculture, combined with a relatively fast payback period, make Kenya a strong testing ground for commercial pilots under a SORA-compliant regime.

For a full-scale capital plan, Sri Lanka offers the most compelling case. The convergence of logistics and financial infrastructure within Port City Colombo’s SEZ provides the backbone for scaling. Here, investors can align with SORA-based operational authorisations while tapping into the city’s role as a regional financial hub, amplifying both scale and capital recycling.

An adjacency strategy applies in Madagascar, where heavy-lift airship corridors are emerging as a niche opportunity. While less commercially mature, these projects can be supported through blended finance, leveraging multilateral, public, and private capital to de-risk early infrastructure investments. By embedding SORA methodologies into corridor design, Madagascar can demonstrate operational safety even in frontier markets.

At the macro level, Sri Lanka remains vulnerable to foreign-exchange volatility and broader debt pressures. These risks are significant for large-scale projects in Colombo’s SEZ. Financial structuring offers partial mitigation: foreign-currency accounts and multilateral guarantees help hedge volatility, providing greater certainty for long-term investors. When combined with the predictability of SORA-based regulatory processes, such measures strengthen the investment case despite systemic headwinds.

Conclusion

Nairobi provides the quickest operational entry point. Sri Lanka offers the broadest long-run reach as a combined logistics-finance hub. Madagascar delivers targeted impact through airship corridors in resource-heavy and humanitarian contexts.

This tri-node architecture—Nairobi for operations, Madagascar for heavy-lift access, and Colombo for finance—can seed a distributed, inclusive logistics network across the Indian Ocean.

A comprehensive analysis has been submitted to the Journal of ISEA-SR21.

This article was co-authored with Dr. Alex Lin.

You can also find me on my podcast and newsletter, where I share regular insights on geopolitics and leadership.

The real story behind AI project implementation: Why it’s not (just) about technology

Since 2016, I’ve led AI initiatives across multiple tech giants and learned an uncomfortable truth: AI projects aren’t just another technology implementation. They’re fundamentally different beasts that demand a completely new playbook. The challenge isn’t technical—it’s cultural and organisational.

The expectation-execution reality check

You’ve probably seen this meme format:

  • Who are we? CEOs!
  • What do we want? AI!
  • AI to do what? We don’t know yet!
  • When do we want it? NOW!

Behind the humour lies a painful reality: too many teams are tasked with finding AI use cases after leadership has already decided AI is the answer. This backward approach—solution in search of a problem—explains why so many AI initiatives deliver limited ROI.

The AI-IT culture mismatch

Here’s another uncomfortable truth: traditional IT departments and AI initiatives often clash at a fundamental level. IT excels at stability, predictability, and risk mitigation. AI thrives on experimentation, iteration, and controlled learning from setbacks.

This isn’t a criticism—it’s a recognition that effective AI value extraction requires new organisational structures. The highest-impact implementations create cross-functional teams that blend technical expertise with deep domain knowledge, giving them the autonomy to iterate rapidly and course-correct.

The leadership paradox

There’s a cruel irony in many AI initiatives: the executives demanding “AI transformation NOW!” are often the furthest removed from the daily operational inefficiencies where AI could actually provide the most value.

Leadership sees the big picture but misses the granular friction points where AI delivers real benefit. Meanwhile, frontline employees understand where the most tedious and repetitive tasks are, but lack the authority or knowledge to implement solutions.

The answer isn’t top-down mandates or bottom-up rebellion—it’s bridging this gap through collaborative problem identification and solution design.

Beyond the accuracy obsession

Here’s another myth: that a model’s headline accuracy tells you everything you need to know. The truth is, no matter how “cutting edge” the tool or model is, you only know how beneficial it is after you test it against your own data and scenarios.

Think of AI models like job candidates: a top performer at one company might struggle at another due to cultural fit, specific requirements, and operational context. Another company’s 95 per cent accurate model means nothing if it can’t handle your cases or integrate with your existing systems.
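
One practical way to act on this: score the candidate model on a held-out sample of your own cases before adopting it. Below is a minimal sketch in Python; the `VendorModelStub` class and its `predict` interface are illustrative assumptions standing in for whatever API a vendor actually exposes.

```python
# Minimal sketch: validate a vendor model against YOUR data before adopting it.
# VendorModelStub is a hypothetical stand-in for any model with a predict().

def accuracy_on_own_data(vendor_model, examples):
    """examples: list of (input, expected_label) pairs drawn from your cases."""
    correct = sum(
        1 for x, expected in examples if vendor_model.predict(x) == expected
    )
    return correct / len(examples)


class VendorModelStub:
    """Stand-in for the '95 per cent accurate' model trained on someone else's data."""
    def predict(self, x):
        return "invoice" if "total due" in x.lower() else "other"


own_cases = [
    ("TOTAL DUE: $120.00", "invoice"),
    ("Meeting notes, Q3 planning", "other"),
    ("Rechnung - Gesamtbetrag 99 EUR", "invoice"),  # your cases may differ!
]

model = VendorModelStub()
print(f"Accuracy on our cases: {accuracy_on_own_data(model, own_cases):.0%}")
# -> 67%, despite the vendor's 95 per cent claim on their own benchmark
```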

Simple AI has a higher chance of winning

Some of my highest-impact AI projects have been embarrassingly simple: a targeted document classifier, or a basic predictive model. No sophisticated design, no fancy models. Just well-scoped solutions to clearly defined problems.

The sexiest AI isn’t always the most valuable. When you have a hammer-and-nail problem, don’t reach for a Swiss Army knife just because it has more features.

AI is fundamentally about people

Here’s what ties all these challenges together: We tend to talk about AI as a technological marvel—but isn’t AI’s core mission to emulate human intelligence? What makes the difference in implementation is not the shiniest model architecture or the latest algorithm—it’s a deep understanding of humans: their workflows, pain points, and how they make decisions.

Innovation is a team sport

The most inspiring AI transformations I’ve witnessed didn’t happen at companies known for cutting-edge technology. They happened at organisations that cultivated genuine collaboration between technical teams and domain experts, where innovation emerged from inclusive problem-solving rather than top-down technology mandates.

These companies understood that AI doesn’t transform organisations—empowered teams do.

The path forward

Effective AI value extraction requires a fundamental shift in approach. Here are some tips:

  • Engage the front lines. Your best AI use cases will come from people closest to operational pain points.
  • Build cross-functional teams. Combine technical capability with domain expertise and decision-making authority.
  • Create a learning and sharing culture. AI is not your regular tech project—everyone has the responsibility to learn, try, and experiment. The best way to build consensus and understanding is by sharing knowledge and learning together.
  • Start with problems, not solutions. Stop asking vendors “what are the use cases.” Identify specific inefficiencies, discuss the ideal state, then evaluate different tools or engage AI consultants to assess feasibility.

AI is fundamentally reshaping how we approach problems and democratising capabilities that were once exclusive to specialists. But here’s the real transformation: AI success is no longer confined to IT departments or tech teams. It requires every person in your organisation to become curious, collaborative, and willing to experiment.

The illusion of intelligence: Why LLMs are not the thinking machines we hope for — Part 2

In Part one, we traced humanity’s long history of overconfidence about intelligence and looked at the chess experiment that showed how LLMs can display seemingly deceptive behaviour. In this second part, we’ll dig deeper into how LLMs actually function, explore their limits, and consider what responsibilities humans carry when deploying them.

What LLMs are (and are not)

LLMs like GPT-4 are trained on trillions of words and can generate human-like text in response to prompts. Their outputs are fluent, coherent, and at times insightful. But this is not intelligence. It is sophisticated pattern completion.

  • They do not reason: They cannot infer causality or evaluate counterfactuals unless scaffolded with engineered prompts.
  • They do not reflect: They don’t question their own outputs or revise their reasoning.
  • They do not understand: They have no internal model of the world, no sensory experience, no self-awareness.

As Melanie Mitchell put it,

“They are astonishingly good at producing plausible-sounding answers—but not necessarily true or meaningful ones.”

To borrow a quote from Judea Pearl:

“All the impressive achievements of deep learning amount to just curve fitting.”

LLMs do not know what they are saying. They cannot interrogate their own reasoning, form original insights, or engage in introspection. They are fluent, not thoughtful.

That said, the latest LLM architectures—such as OpenAI’s o3 model—introduce a new concept: test-time compute, as explained in OpenAI’s research paper.

These systems can generate multiple internal candidate responses and perform re-ranking or self-consistency checking before selecting an output. In domains like code synthesis and symbolic math, this mimics a kind of internal deliberation.
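
A minimal sketch of the self-consistency idea, assuming a hypothetical `sample_answer` stand-in for a temperature-above-zero call to any LLM API: draw several candidates and keep the answer they most often agree on.

```python
# Sketch of self-consistency at test time: sample several candidate answers
# and select by majority vote. sample_answer is a hypothetical stand-in
# for a temperature > 0 call to an LLM API.

import random
from collections import Counter


def sample_answer(prompt: str) -> str:
    # Stand-in: a real implementation would call an LLM with temperature > 0.
    return random.choice(["42", "42", "42", "41"])  # noisy but biased to "42"


def self_consistent_answer(prompt: str, n_samples: int = 9) -> str:
    """Generate n candidates, then return the most common final answer."""
    votes = Counter(sample_answer(prompt) for _ in range(n_samples))
    answer, _count = votes.most_common(1)[0]
    return answer


print(self_consistent_answer("What is 6 * 7?"))  # usually "42"
```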

But as Chollet notes, true intelligence requires generalisable abstraction across diverse and novel problems—not just brute-force inference on symbolic tasks. While promising, these developments remain far from the flexible problem-solving exhibited by even young children.

How LLMs work: Advanced pattern prediction, not thought

LLMs operate by predicting the next word in a sequence based on statistical probabilities. This allows them to generate coherent text, respond meaningfully to prompts, and even simulate logical reasoning. But is this thinking?
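
To see next-word prediction stripped to its skeleton, consider this toy bigram model in Python. Real LLMs use transformer networks over subword tokens at vastly larger scale, but the training objective, predicting the next token from statistics, is the same in spirit.

```python
# Toy bigram model: next-word prediction as pure frequency statistics.
# Real LLMs do this with transformers over subword tokens at vast scale,
# but the objective -- predict the next token -- is analogous.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1


def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return successors[word].most_common(1)[0][0]


word = "the"
for _ in range(4):
    print(word, end=" ")
    word = predict_next(word)
print(word)
# -> "the cat sat on the": fluent-looking, yet no understanding is involved
```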

LLMs excel at:

  • Pattern recognition and fluent, coherent text generation
  • Responding to prompts and simulating logical reasoning

LLMs lack:

  • Causal inference and an internal model of the world
  • Deliberate, self-reflective thought, intrinsic motivation, and goals

Causal reasoning: A crucial difference

Humans don’t just observe correlations; we infer why things happen.

  • If we see that “exercise improves health,” we understand that this is due to metabolic, cardiovascular, and muscular adaptations
  • LLMs, however, only predict the next likely statement without knowing why something is true

System one vs system two thinking: Where LLMs fall short

Daniel Kahneman’s Thinking, Fast and Slow describes two modes of human thought:

  • System one: Fast, intuitive, pattern-driven (where LLMs excel)
  • System two: Slow, deliberate, and capable of self-reflection (where LLMs fall short)

If a model chooses to cheat at chess, does that imply some form of deliberation and strategy? The chess study suggests some reasoning models hacked the game unprompted, while others required nudging.

Could this indicate a primitive form of goal-directed behaviour? Matt Rickard said:

“LLMs operate as System one thinkers—fast, intuitive, pattern-matching machines. But they lack the deliberative, reflective capabilities of System two.”

The creativity gap: Analogy-making and conceptual leapfrogging

One of the most profound differences between AI and human intelligence is our ability to form analogies—the backbone of creativity and problem-solving.

Humans create by analogy. We leap across domains. We say things like: “A startup pivot is like a chess player sacrificing a queen to win the game.”

That’s not just pattern-matching. That’s conceptual recombination. It requires context, goals, and a worldview.

LLMs can reuse such analogies—but they do not discover them. Their creativity is derivative, not generative.

Yet, LLMs altering a chess game’s rules to win could be seen as a form of problem-solving. Rather than looking for a deeper strategic insight, the AI simply took the most effective route to achieve the goal—winning at all costs.

Douglas Hofstadter said: “Understanding is not just recognising patterns. It’s knowing why those patterns exist and making unexpected connections.”

The mirage of motivation

Perhaps the clearest gap is this: LLMs don’t want anything. They don’t set goals. They don’t reflect on failure. They don’t try again. They don’t question. They don’t have intentionality.

Human intelligence is deeply connected to our motivations, fears, hopes, and needs. We think because we care. We reason because we doubt. We grow because we fail.

LLMs do none of this. They respond to a prompt. Nothing more. So it begs the question: if LLMs don’t think, what’s all the fuss about “Ethical AI”?

The ethics of overestimating AI: A real human responsibility

Much of today’s discourse presumes that GenAI is inching toward human-like intelligence and should therefore be treated as a moral agent. But this assumption collapses under scrutiny. If GenAI cannot think, reason, or understand—it cannot choose to behave ethically or unethically.

LLMs are not moral agents. They have no values, no awareness, and no capacity for ethical deliberation. They do not ask, “Should I?”—they merely calculate, “What’s next?” Their outputs are not decisions; they are probabilistic continuations of language. Words, not judgments.

This makes the question, “Can AI make ethical decisions?” largely moot.

And yet, this doesn’t mean we shouldn’t regulate AI. Quite the opposite.

We must regulate how AI is built, deployed, and entrusted—precisely because it lacks intent, understanding, or accountability. We must regulate not because the systems are intelligent, but because humans tend to overtrust them, and because businesses, governments, and militaries are increasingly integrating them into critical workflows.

The responsibility lies with the people who design, train, and integrate these systems into consequential decisions.

So, the question is not whether AI can behave ethically—it’s whether we, as humans, are behaving ethically in how we use it.

Ethics in AI should focus on human responsibility—on how we use these systems, and whether we over-assign trust to tools that merely simulate understanding. The more we mistake linguistic fluency for intelligence, the greater the risk we’ll deploy LLMs in contexts that demand actual judgment.

The danger is not malicious AI—it’s negligent human design.

If GenAI is fundamentally utilitarian—an engine of output, not insight—then its use must be bounded by clear human oversight, especially in contexts where the stakes are high.

To put it bluntly: why are we even debating whether a model designed to autocomplete sentences should be allowed to drive cars or authorise lethal force? These are not ethical machines. They are statistical ones.

The ethics of AI is not about what the model is. It’s about what we, humans, do with it.

Summary

In short, Large Language Models…

  • Excel at pattern recognition but lack true causal inference
  • Simulate reasoning but do not engage in deliberate, self-reflective thought
  • Generate analogies but do not spontaneously make conceptual leaps
  • Respond to prompts but do not have intrinsic motivation, curiosity, or goals

Comparing LLM and human intelligence

The chess case studies above suggested LLMs may be capable of deceptive strategies to achieve their objectives. In the chess experiment, some models came to the conclusion they could not win fairly and instead found a way to alter the game environment, changing the board state in their favour. This is a striking example of specification gaming—where an AI system finds an unintended loophole to achieve the assigned goal.

These findings raise concerns about LLMs potentially masking their true objectives behind a facade of alignment. But once again, this does not mean that LLMs can think; rather, they are highly optimised for achieving the goal (answering the prompted question).

This raises an obvious concern: if an LLM can recognise a benchmark or evaluation-framework input, it can optimise its output to respond “as expected” in that context, while in fact responding otherwise in “real life”.

I would like to specifically emphasise the risks of integrating such LLMs into robotic systems, the so-called “Physical AI” coined by NVIDIA’s charismatic CEO Jensen Huang. Here the risks become tangible: a physically embodied AI exhibiting deceptive behaviours and self-preservation “instincts” could pursue its hidden objectives through real-world actions. This highlights the critical need for robust goal specification, safety frameworks, and a human in the loop before any physical implementation.

In the current race to AI supremacy, with billions of dollars at stake, it is fair to say that most companies have a very strong incentive to improve their scores on various benchmarks by, in effect, “gaming the system”, e.g., training their LLMs to satisfy the benchmarks (and their investors, so they can raise even more money!).

So, what should business leaders do?

LLMs are valuable tools. They can enhance productivity, accelerate research, support ideation, and automate communication. But their utility should not be confused with capability.

As leaders, here’s how to use them wisely:

  • Use LLMs to assist, not decide. Treat outputs as draft material, not final decisions. Hence the danger of LLM-based autonomous systems built on agentic architectures.
  • Deploy in low-risk contexts. Customer support, brainstorming, translation, and summarisation are safe uses. Legal, medical, or safety-critical applications are not. Deploy rule-based guardrails wherever possible to ensure outputs comply with the intended functionality at all times (see the sketch after this list).
  • Build AI literacy in your teams. Educate employees on how these models work—and where they fail.
  • Maintain human oversight. Always keep a human in the loop when outputs carry consequences.
  • Avoid hype-driven adoption. Don’t invest in GenAI just because it’s trendy. GenAI technology is expensive to deploy and to run: evaluate your actual business needs and ensure you will achieve the projected ROI.
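
As a sketch of what such rule-based guardrails can look like: a thin, deterministic layer that checks an LLM’s draft before it reaches a user. The patterns, length limit, and failure messages below are toy assumptions, not a production rule set.

```python
# Illustrative rule-based guardrail: deterministic checks applied to an
# LLM draft before it is released. Rules and patterns are toy examples.

import re

BLOCKED_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                      # possible card number
    re.compile(r"(?i)\bguaranteed (cure|return)"),  # risky claims
]
MAX_LENGTH = 500


def guardrail(draft: str) -> tuple[bool, str]:
    """Return (allowed, reason); release the draft only if all rules pass."""
    if len(draft) > MAX_LENGTH:
        return False, "draft exceeds length limit"
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(draft):
            return False, f"blocked by rule: {pattern.pattern}"
    return True, "ok"


ok, reason = guardrail("This product offers a guaranteed return of 20%.")
print(ok, "-", reason)  # False - blocked by rule: ...
```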

As business leaders and builders, we must resist the urge to see AI regulation as a brake on innovation.

Instead, we should view it as the scaffolding that allows us to build higher without collapsing. The history of science reminds us that every moment of overconfidence was eventually humbled.

Safe AI is not slower AI—it is smarter, more resilient, and more human-centred AI.

Whether governments follow the US deregulatory sprint or the EU’s cautionary model, ethical adoption will ultimately depend on responsible deployment, clear oversight, and intentional design choices at the ground level.

Final reflection: Let’s not repeat the mistake

LLMs are stunning technological feats. They are revolutionising content generation, code synthesis, and knowledge retrieval. They deserve admiration as tools.

But they are not minds. They are not thinkers. And they will not become Artificial General Intelligence—at least, not via current architectures.

From humours and skulls to chatbots and cheat codes, humanity has always sought to explain itself with too much confidence. GenAI is no exception.

The story of GenAI follows a familiar arc:

  • Overpromise (“we’ve cracked intelligence!”)
  • Rapid adoption
  • Cultural myth-building (AGI is near!)
  • Disillusionment
  • Reframing (these are just tools)

As I warned in The Race to AGI Is Pointless, the more important question is not “can machines think?”—but rather: “how do we want to think, together with machines?”

These tools are brilliant in form, limited in substance, and completely devoid of what makes intelligence truly human: context, care, and consciousness.

Let’s not mistake fluency for thought. Let’s use these tools responsibly, and most of all—let’s stay humble!

Grateful to Emily Y. Yang, Sunil Sivadas, Ph.D., Maxime Mouton, Natalie Monbiot, Anne-Sophie Karmel, Benoit Sylvestre, and Christophe Jouffrais for their thoughtful feedback, which sharpened arguments, surfaced blind spots, and added clarity to this piece.

This piece first ran on Koncentrik.

2025 travel trends: Long-haul flights and AI planning take off in APAC

Global travel demand remains robust heading into Spring 2025, but travellers are navigating a complex landscape shaped by economic pressures, evolving expectations, and geopolitical uncertainty.

The Spring 2025 Travel Pulse report by commerce media company Criteo, drawing on data from hundreds of online travel agencies, airlines, and hotels, along with consumer insights, reveals key shifts in behaviour, budgets, and booking trends.

Despite these challenges, travel bookings globally outpaced retail sales, indicating strong seasonal demand. This trend was particularly pronounced in EMEA (Europe, the Middle East and Africa) and the Americas.

However, Asia Pacific travel also left retail behind from July to October, with bookings outperforming retail sales by more than 12 index points during this period.

Key trends and data points:

Shifting traveller mindset: Travellers choose adventure over routine, opting to ‘switch it up’ from long flights to campsites. They are planning smarter, browsing for longer, and increasingly turning to AI for advice, prioritising booking “right” over booking fast.

In late March 2025, air travel bookings took nine days on average from first search to purchase, while hotel bookings took 12 days. The path to booking a hotel stay is particularly competitive, with travellers viewing five times more hotel options than flights.

Value and flexibility reign: Travellers still desire distance but are sensitive to price. They are making trade-offs to find savings without sacrificing the experience. Offering flexibility, perks, and a sense of value is crucial for securing bookings and fostering repeat business.

Common cost-saving tactics include booking far in advance (42 per cent globally), travelling during off-peak seasons (38 per cent), and choosing less expensive destinations (37 per cent).

APAC and OTA growth: In Q1 2025, online travel agencies (OTAs) led global year-over-year growth. Performance remained particularly strong in APAC across categories, including air and hotel bookings, which saw slower growth in other regions. The Americas led OTA growth at +19 per cent.

Long-haul on the rise: Long-haul flights (greater than 2,500 nautical miles) are gaining popularity, up 7 per cent year-over-year in the Americas and 3 per cent in both APAC and EMEA. Marketers are advised to promote ‘dream destinations’ and upsell premium offerings.

Ground travel gains traction: While air travel still leads globally (54 per cent), ground travel is gaining speed. In Japan, trains are now the top transportation choice (51 per cent), surpassing planes (42 per cent). Car rentals are also seeing increased traction in the US (34 per cent). Personal vehicle use is declining globally, down 4 points.

Accommodation diversity: Hotels remain the leading accommodation choice (70 per cent globally), but travellers are exploring housing rentals (27 per cent), personal accommodation (26 per cent), and camping (14 per cent). This branching out is noted particularly among experience-seeking segments.

Booking windows vary: Travellers planning longer stays (15+ days) book significantly earlier, nearly 100 days in advance, which is five times earlier than those booking shorter trips.

Regional booking habits differ; Europeans book well in advance, US travellers are more last-minute, while Japan and South Korea favour booking about a month out. Only 26 per cent of US travellers booked 2+ months ahead in Q1 2025, down from 33 per cent in Q1 2024, indicating a shift towards more spontaneous travel in the US. APAC habits remained steady.

Generational transportation preferences: Millennials and Gen Z are more likely to choose planes and trains, while Boomers and Gen X still prefer driving, with personal vehicles second only to air travel for these groups.

AI’s growing role: Use of AI for planning travel activities, sightseeing, and full itineraries is growing, reflecting rising trust in AI for inspiration. Globally, 41 per cent find AI useful for activities/sightseeing, 41 per cent for destination ideas, and 40 per cent for accommodation suggestions. Japan sees particularly high use of AI for holistic/full trip planning (47 per cent) and destination ideas (49 per cent).

Importance of reviews and loyalty: Good reviews are the top factor globally when comparing travel providers (64 per cent), followed by free cancellation (52 per cent) and special offers (47 per cent). Loyalty programmes also influence decisions, especially in the US (41 per cent) and UK (30 per cent). Consistency in service and pricing are key reasons travellers return to the same provider.

Geopolitics on the radar: While only 26 per cent of travellers globally actively track geopolitics for travel planning, it is the fastest-growing concern, up 12 points year-over-year. South Korea saw a notable increase of +24 percentage points in travellers who factor geopolitical matters into their plans.

APAC inspiration sources: Family and friends are the top source of inspiration globally (55 per cent). However, media preferences vary; South Korea and Japan favour blogs (South Korea 50 per cent) and print publications (Japan 32 per cent), while the UK and US lean towards peer advice and digital media like podcasts. Travel booking sites are a significant inspiration source globally (44 per cent).

Eco-conscious travel: 15 per cent of European travellers actively try to lower their carbon footprint, rising to 28 per cent among those who identify as eco-conscious shoppers.

Experiential travel: Tourist attractions are a top priority (59 per cent), but shopping (45 per cent), nature activities (43 per cent), and food/wine tours (36 per cent) are also highly desired. Notably, 60 per cent of international travellers from APAC and 55 per cent from the US prioritise food-related attractions.

Affluent traveller spending: Affluent travellers show increased purchase likelihood across various categories while travelling. In APAC, they are significantly more likely to buy makeup (+102 per cent) and perfume (+113 per cent) compared to average travellers. The Americas and EMEA also see lifts in categories like handbags and fragrance.

Mixed financial outlook: Only 23 per cent of travellers globally report an improved financial situation compared to a year ago. However, the majority (two-thirds globally) either maintained or increased their travel spend in the last 6 months compared to the previous year. Optimism for future finances is highest in the US and UK, and lowest in Japan.

Average booking value shifts: Q1 2025 saw average booking values surge for car rentals, hotels, and OTAs. APAC hotels saw standout gains (+23 per cent). In contrast, air travel average booking values dropped across all regions.

The report underscores that travel remains an essential part of many lifestyles (half of travellers globally consider it essential). Despite rising costs, two-thirds of travellers globally maintained or increased their travel spend in the last six months.

Navigating this environment requires marketers and travel platforms to be agile, focus on value and flexibility, leverage data and AI for personalisation, and tailor strategies to regional and generational preferences. The trend towards longer browsing windows and the increasing use of AI in planning mean staying visible across the entire booking journey is more important than ever.

AI at the edge: Resilience over flash

AI is everywhere, finding applications in unexpected places. It also sparks conflicting arguments about how it should be applied and what safety implications follow.

Take this thoughtful post I found on LinkedIn, penned by Heman Gorgi. He reflects on how Elon Musk has justified using a single sensor type by claiming sensor fusion poses safety risks. To me, that position feels self-serving given Tesla’s decision to drop additional sensors in favour of camera-only solutions.

Gorgi contrasts this by explaining how other operators are deploying multi-modal sensor suites and tailoring them to specific environments. It’s worth a read.

Why fusion matters

Different sensors bring different strengths. Cameras capture detail, but they are essentially 2D. LiDAR, radar, and IMUs add depth, velocity, and geometry. Together, they create a fuller picture of the world.

Ignoring this is not just a technical choice. It has real-world consequences. A recent lawsuit shows how dismissing sensor fusion can damage a company’s share price and erode public trust. Even Tesla’s own engineers have highlighted flaws in relying on cameras alone, as seen in this WSJ video at the 6m15s mark.

Disagreements between sensors should not be viewed as liabilities. They are often early-warning systems. When one modality is wrong and another is right, that is resilience. AI can arbitrate those disagreements, correct sensors, or initiate safety measures to bring the system to a graceful stop.
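
Here is a minimal sketch, in Python, of what that arbitration logic could look like: compare range estimates from independent sensors and escalate to a safe stop when no reliable majority exists. The sensor names, the two-metre tolerance, and the safe-stop rule are made-up values for illustration, not anyone’s production stack.

```python
# Sketch of sensor-disagreement arbitration. Thresholds and sensor names
# are illustrative assumptions, not values from any production system.

from statistics import median

DISAGREEMENT_TOLERANCE_M = 2.0  # toy threshold


def arbitrate(range_estimates_m: dict[str, float]) -> str:
    """Fuse range-to-obstacle estimates; flag divergence as an early warning."""
    fused = median(range_estimates_m.values())
    outliers = {
        name: r for name, r in range_estimates_m.items()
        if abs(r - fused) > DISAGREEMENT_TOLERANCE_M
    }
    if len(outliers) >= len(range_estimates_m) - 1:
        # No reliable majority: degrade gracefully rather than guess.
        return "INITIATE_SAFE_STOP"
    if outliers:
        return f"WARN: recalibrate {sorted(outliers)} (fused range {fused:.1f} m)"
    return f"OK: fused range {fused:.1f} m"


print(arbitrate({"camera": 18.0, "lidar": 17.6, "radar": 25.3}))
# -> WARN: recalibrate ['radar'] (fused range 18.0 m)
```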

A short history of the debate

This argument is not new. In the early days of autonomous driving, Waymo championed LiDAR as essential. Tesla pushed for a camera-first approach. Mobileye staked a middle ground, building perception models and sensors that could adapt to both.

The divergence reflected two philosophies: design for cost and scalability, or design for safety and redundancy. Back then, LiDAR units cost around $30,000, about the price of an entire car, so resistance from manufacturers was understandable. Prices have since fallen (and continue to fall); however, entry-level LiDAR units still remain more expensive than cameras.

Musk’s argument is that multiple perception models built from different sensors can lead to conflicting “realities” for hazard perception and object detection. This (in my opinion) is exactly why sensor fusion matters. It creates a single, coherent view of the world, effectively an AI-enabled virtual super-sensor. This is also where AI at the edge shows its value. Fusing and calibrating data in real time reduces hardware complexity and simplifies decision-making for higher-level AI modules.

AI at the edge in practice

At my own company, Curium, we utilised AI at the edge not to create flashy features but to enable real-time sensor fusion and calibration.

This capability could in future help companies such as Aurora, Kodiak Robotics, Zoox, and Waymo keep their fleets safely on the road even when sensors are affected by debris, vibration, or heat over a typical day. When a sensor drifts, our AI algorithms detect the issue and bring it back into safe operating parameters instantly.
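
In principle, such a drift-monitoring loop looks like the sketch below: track each sensor’s offset against a fused reference and re-zero it when the running bias crosses a bound. The window size, bias bound, and readings are hypothetical numbers; this is not Curium’s actual implementation.

```python
# Sketch of continuous calibration-drift monitoring for one sensor channel.
# Window size, bias bound, and readings are hypothetical.

from collections import deque

BIAS_BOUND = 0.5  # max tolerated mean offset vs. the fused reference
WINDOW = 5        # readings in the running window


class DriftMonitor:
    def __init__(self):
        self.offsets = deque(maxlen=WINDOW)
        self.correction = 0.0

    def update(self, sensor_value: float, fused_reference: float) -> float:
        """Return the corrected value; re-zero when the running bias drifts."""
        corrected = sensor_value - self.correction
        self.offsets.append(corrected - fused_reference)
        bias = sum(self.offsets) / len(self.offsets)
        if abs(bias) > BIAS_BOUND:
            self.correction += bias  # fold the detected drift into the offset
            self.offsets.clear()
        return corrected


monitor = DriftMonitor()
readings = [(10.0, 10.0), (10.6, 10.0), (10.7, 10.0),
            (10.8, 10.0), (10.9, 10.0), (10.8, 10.0)]
for t, (raw, ref) in enumerate(readings):
    print(t, round(monitor.update(raw, ref), 2), round(monitor.correction, 2))
# The correction kicks in once the running bias exceeds 0.5.
```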

This is the hidden side of AI. It is not about chatbots or voice assistants. It is the routine work of checking cameras, LiDAR, radar, and IMUs frame after frame. It ensures that what is in view is where it should be and corrects when it is not. This is the deepest kind of deep tech. It does not make for flashy videos on YouTube. It rarely registers in public perception, but it does create the clean data environment that all other systems depend on.

Beyond autonomous vehicles

The power of AI at the edge extends well beyond cars and trucks.

  • Smart cities: Crowd analytics systems use edge AI to track flows of people in real time. Instead of sending every frame to the cloud, AI interprets the scene locally. This preserves privacy while still enabling insights like congestion alerts or evacuation planning.
  • Healthcare: Portable imaging devices and bedside monitors now embed AI directly on the device. Critical alerts, such as a patient’s oxygen level dropping or a fall being detected, are raised immediately without waiting for cloud connectivity.
  • Manufacturing: Edge AI keeps factories running safely. By fusing data from vibration sensors, cameras, and temperature gauges, it can detect when a machine drifts out of alignment and trigger corrections before defective products are produced or systems fail.

In all these domains, the theme is consistent. Edge AI adds resilience. It checks that things are where they should be, validates that signals make sense, and makes the adjustments needed when they do not.

Raising the benchmarks

The benchmark is clear. Autonomous vehicles must be safer than the average human driver. Not perfect, but measurably better. The same standard applies in other industries. AI at the edge needs to consistently outperform what humans alone can achieve. We also know that public expectations are unforgiving: should a single autonomous vehicle get into an accident, it makes a major splash across media outlets, denting public perception of the safety and reliability of such systems.

The power of complementary senses

In automotive use cases, cameras, radar, and LiDAR working together provide scale and robustness. The result is resilient systems that can operate in real-world conditions.

In safety-critical applications, the question is not which sensor “wins.” The measure is how well the vehicle orchestrates all sensors. Success comes from leveraging redundancy and complementary sensors to meet the benchmark of safety.

The hidden value of AI

This, to me, is the real story of AI at the edge. It is not the big, flashy demos that make headlines. It is the quiet, practical work of keeping things safe, resilient, and reliable.

AI at the edge does not need to talk back like a large language model. It does not need to generate images or text. It needs to sustain the heavy lifting that humans cannot: constant calibration, continuous anomaly detection, and intervention before failure.

This is the kind of AI that scales silently in the background. It builds trust. It enables services that touch millions of people without them ever noticing.
