Posted on

The hidden dangers of AI bias: Where it can go wrong

A 2025 study found that AI-generated summaries influenced users to make purchase decisions 84 per cent of the time, even though the summaries contained hallucinated or altered facts in up to 60 per cent of cases.

This is not just a technical flaw. It’s a product liability risk.

If your AI changes the sentiment of reviews and invents product features, it nudges users toward purchases. Then you are no longer just building an AI tool—you are shaping consumer behaviour in ways that may be misleading or even legally questionable.

AI bias here is not just unfair—it’s conversion distortion.

Every dataset used to train AI systems is essentially a snapshot of the real world. But it’s a snapshot that comes with all the imperfections, prejudices, and historical inequalities of that world.

Let’s explore a few real-world cases where it has gone wrong.

AI’s bias in selecting resumes for hiring

For instance, imagine you train an AI to recognise job applicants based on resumes.

If the data that the AI is trained on predominantly includes resumes from a certain demographic, the system may learn to favour that demographic, reproducing and even amplifying existing biases.

This was precisely the problem with an AI tool Amazon used in the past to screen resumes.

Amazon’s AI hiring tool was trained on resumes submitted to the company over several years, a pool that unfortunately had an overwhelming bias toward male candidates.

As a result, the AI learned to favour male-associated words and traits, like “aggressive” or “competitive,” and ended up filtering out resumes from women.

The AI had simply learned the pattern of who was hired, not the traits that would have led to success for any candidate, regardless of gender. The algorithm did not have the nuance to recognise gender inequality and instead perpetuated it.
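The failure mode described above can be made concrete with a simple check. The sketch below uses purely hypothetical screening outcomes, not any company's real pipeline; it computes the selection-rate ratio between two groups, and under the common "four-fifths rule" a ratio below 0.8 is treated as evidence of adverse impact.

```python
def selection_rate(decisions):
    """Fraction of 1s ("advanced to interview") in a list of 0/1 screening decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 means parity)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    lo, hi = min(ra, rb), max(ra, rb)
    return lo / hi if hi else 1.0

# Hypothetical screening outcomes: 1 = advanced, 0 = filtered out.
men   = [1, 1, 1, 0, 1, 1, 0, 1]   # 75 per cent advance
women = [1, 0, 0, 0, 1, 0, 0, 0]   # 25 per cent advance

ratio = disparate_impact_ratio(women, men)
print(round(ratio, 2))   # 0.33, far below the 0.8 four-fifths threshold
```

A model that merely reproduces the historical hiring pattern will fail this check even when no one intended it to discriminate.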

This example demonstrates that AI isn’t immune to the biases inherent in human decision-making. In fact, because it operates based on historical data, it often amplifies those biases.

Whether it’s racial, gender-based, or socio-economic bias, AI can end up supporting societal inequalities if not carefully controlled.

Also Read: The rise of invisible businesses: Why the most powerful companies may be built by one person and AI

Risk of over-optimisation

Another major problem in AI pattern recognition is over-optimisation.

This happens when an algorithm is trained too thoroughly on a specific dataset and ends up “memorising” the data rather than learning the underlying pattern.

As a result, the AI performs well on the data it was trained on but poorly when exposed to new, unseen data. This lack of generalisation can be particularly dangerous when AI is deployed in the real world, where data is constantly changing.
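A toy illustration of this gap, under purely synthetic assumptions: a 1-nearest-neighbour "model" memorises its training data, scores perfectly on it, and then fails completely once the regime shifts (here, simulated by inverting every label).

```python
import random

def one_nn_predict(train, x):
    """Return the label of the training point nearest to x (pure memorisation)."""
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

def accuracy(memorised, data):
    return sum(one_nn_predict(memorised, x) == y for x, y in data) / len(data)

random.seed(0)
# "Growth era" training data: 50 distinct points with noisy 0/1 labels.
train = [(i / 50, random.randint(0, 1)) for i in range(50)]

# On its own training set, the memoriser is perfect...
print(accuracy(train, train))    # 1.0

# ...but when the regime shifts and every label inverts, it fails completely.
shifted = [(x, 1 - y) for x, y in train]
print(accuracy(train, shifted))  # 0.0
```

Real overfitting is rarely this extreme, but the mechanism is the same: the model has stored the snapshot, not the underlying pattern.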

Take the example of an AI model trained to predict stock market movements. If it is trained on historical stock data that covers a period of rapid economic growth, the AI might learn to associate certain market behaviours with positive economic conditions.

However, if the economy shifts and a recession begins, the AI might not recognise the new patterns and could make disastrously inaccurate predictions. This is an issue of over-optimisation. The AI has learned patterns specific to one period in time, but cannot extrapolate useful information for a new scenario.

For example, Wealthfront, a robo-advisor that uses AI to manage investment portfolios, had an incident where its algorithm predicted a market correction and advised its clients to sell off stocks in anticipation of a downturn. However, the correction didn’t materialise as expected, and the stocks that were sold off ended up increasing in value.

The AI was reacting to market indicators that pointed to a correction, but it failed to account for other factors, such as market sentiment and long-term trends. It was a case of model overfitting, where the algorithm focused too narrowly on historical patterns rather than adapting to evolving market conditions.

AI’s bias in healthcare at IBM

Imagine an AI that has been trained on a specific subset of medical data that doesn’t account for all possible patient conditions.

If that AI is used to make medical diagnoses in the real world, its inability to adapt to new conditions could result in missed diagnoses or, worse, fatal errors.

IBM’s Watson for Oncology was designed to help doctors diagnose and treat cancer by analysing medical data. However, it was revealed that the system was providing unsafe and inaccurate treatment recommendations, as it was trained on limited and biased data. In some cases, Watson made recommendations that didn’t align with clinical standards, and it struggled with real-world data complexity.

Lack of contextual learning

While AI systems are excellent at recognising patterns within the scope of the data they are trained on, they lack the ability to understand the context in which these patterns occur.

Humans have the capacity for empathy, ethical reasoning, and a broader understanding of the world, which is something that AI simply cannot replicate yet.

Also Read: The art of AI integration: Growing your business with chatbots and human expertise

AI’s bias in criminal justice

A glaring example of this is AI’s use in criminal justice, particularly in predictive policing. Predictive policing algorithms use historical crime data to forecast where crimes are likely to occur, in an attempt to optimise law enforcement resources.

However, these algorithms are prone to problematic outcomes because they don’t understand the socio-economic or political context behind why crimes are committed in certain areas.

For instance, if an AI system identifies a pattern where certain neighbourhoods have higher crime rates, it might suggest that police patrols be concentrated in those areas. But it may fail to account for systemic issues such as poverty, lack of education, or historical over-policing, which contribute to these higher crime rates in the first place.

Instead of addressing the root causes of crime, the AI ends up reinforcing a cycle of surveillance and criminalisation that disproportionately affects marginalised communities.

For example, the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm was used in the US criminal justice system to predict the likelihood of recidivism (repeat offences) and inform parole decisions. Investigations found that the system was biased against Black defendants, giving them higher risk scores compared to white defendants with similar criminal histories.

In essence, the AI has no moral or ethical compass to guide its decisions. It simply follows the data, leading to outcomes that may perpetuate harm rather than reduce it.

The risk of “invisible” bias

One of the more deceptive aspects of AI’s bias is that it’s not always obvious. Often, AI systems are seen as impartial or objective because they aren’t influenced by human emotions, subjective opinions, or personal experiences.

However, the reality is that human biases are embedded in the design and deployment of these systems in ways that may be invisible to users.

Consider facial recognition software in China. Chinese facial recognition technology has come under fire for disproportionately misidentifying certain ethnic groups. A recent study showed that in regions with minority populations, facial recognition models had higher error rates, leading to false arrests and discrimination.

While these issues might seem specific to the technology or country, they highlight a larger trend: AI systems built without local context or inclusive data can fail spectacularly when deployed at scale.

These biases often remain hidden because, to the untrained eye, the system “seems” to work fine when tested on a homogenous group.

This issue of invisible bias is compounded by the fact that the vast majority of AI models, especially those used in industry and business, operate as “black boxes.”

The decision-making processes of many AI systems are not transparent, meaning the users of these systems may have no idea how or why the AI made a particular decision.

When these decisions have real-world consequences, such as who gets approved for a loan or who gets hired for a job, there’s little accountability or recourse for those affected.

So, how do we tackle AI bias? Let’s look at some interesting solutions a few startups have explored.

Also Read: AI and ethics in digital marketing: Building trust in the tech era

Pymetrics

A startup focusing on AI-driven recruitment tools introduced an ethical AI framework by using neuroscience-based games and algorithms that assess candidates’ cognitive and emotional abilities rather than relying on resumes or biased historical data.

They also partnered with the Fairness, Accountability, and Transparency community to have their models regularly audited for fairness, ensuring that their system doesn’t perpetuate bias.

Impact: This approach provides a more equitable hiring process and has led to a more diverse and inclusive workforce for companies using their platform.

Truera

An AI explainability startup developed an AI model monitoring and auditing tool that not only explains model decisions but also helps identify and mitigate bias in machine learning models. The platform uses visualisations and diagnostics to show if certain demographic groups are disadvantaged by a given model.

Impact: By identifying hidden biases in complex AI models, Truera helps companies correct these issues before they impact real-world outcomes, promoting fairness in automated decisions.

Zest AI

It focuses on making AI-driven lending fairer by using an alternative credit scoring model that analyses a wider variety of factors, including behaviour and transaction history, instead of just traditional credit scores. They also continuously test their models for bias against different groups to ensure equitable access to financial services.

Impact: Zest AI’s methods have led to more accurate credit assessments, increasing loan approvals for underrepresented groups without increasing risk for lenders, thus reducing financial inequality.

H2O.ai

A startup known for its open-source machine learning tools introduced an automated tool that integrates with its platform to detect and mitigate bias. Their solution uses fairness constraints during training to ensure that models do not favour one group over another, regardless of sensitive attributes like race, gender, or age.

Impact: Their tool, “Fairness.ai,” has been adopted by companies looking to build more transparent and accountable models that are less prone to bias, enhancing trust in AI-powered decision-making.
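As a generic sketch of how a fairness constraint can enter training (this is not H2O.ai's actual implementation, just one common pre-processing approach), examples can be reweighted so that no group dominates either outcome before the model is fit:

```python
from collections import Counter

def fairness_weights(groups, labels):
    """Weight each (group, label) cell inversely to its frequency, so no group
    dominates either outcome when a model is later fit on the weighted data."""
    counts = Counter(zip(groups, labels))
    n, cells = len(groups), len(counts)
    return [n / (cells * counts[(g, y)]) for g, y in zip(groups, labels)]

# Toy data: group "a" supplies most of the positive labels.
groups = ["a", "a", "a", "b", "a", "b"]
labels = [ 1,   1,   1,   0,   0,   1 ]
w = fairness_weights(groups, labels)

# After reweighting, both groups carry equal positive-class mass.
pos_a = sum(wi for wi, g, y in zip(w, groups, labels) if g == "a" and y == 1)
pos_b = sum(wi for wi, g, y in zip(w, groups, labels) if g == "b" and y == 1)
print(pos_a == pos_b)   # True
```

Production tools combine this kind of preprocessing with in-training constraints and post-hoc audits, but the underlying idea is the same.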

One of the most important things to remember is that while AI has immense potential, it’s not inherently neutral or infallible.

Its power and effectiveness are entirely dependent on the way it is designed, trained, and used.

In a nutshell

As AI continues to evolve, its ability to recognise and predict patterns will only improve.

The key lies in ensuring that the humans who design and deploy these systems are aware of these risks and work to make AI a force for fairness, equity, and progress. In the end, the true power of AI will be in its ability to enhance human capabilities, not replace them.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.

Join us on WhatsApp, Instagram, Facebook, X, and LinkedIn to stay connected.

The post The hidden dangers of AI bias: Where it can go wrong appeared first on e27.


Why 2026 will be the year AI moves from hype to mandatory safety infrastructure

Across Asia, the scale and intensity of industrial development have transformed its skylines, logistics corridors, and manufacturing capacity in less than two decades. Yet one issue still persists: safety systems have not matured at the same pace.

The numbers illustrate this pressing challenge. The Asia-Pacific region accounts for almost 63 per cent of global workplace fatalities. The rate of fatal injuries has reached 12.7 deaths per 100,000 workers, which is four to five times higher than those recorded in Europe. The majority of these incidents occur in construction and manufacturing sectors, where dynamic environments, heavy equipment, and evolving site conditions create constantly shifting hazards.

With workplace safety concerns persisting, regulatory bodies throughout Asia have begun to take a firmer approach, with many jurisdictions transitioning from guidance to enforceable requirements.

This is where the context of artificial intelligence (AI) in workplace safety has moved from experimentation to strategic consideration, and we examine this turning point in safety infrastructure, while also looking at its shortcomings.

Regulation quietly turning safety technology into policy

One of the clearest signals that AI-enabled monitoring is transitioning from innovation to infrastructure is the regulatory change introduced across the region.

For instance, in Singapore, the Ministry of Manpower (MOM) took a decisive step by requiring, since June 2024, Video Surveillance Systems (VSS) on construction projects valued at SG$5 million (US$3.89 million) or more where high-risk activities occur, including work at height, lifting operations, excavation zones, and areas with heavy machinery.

The policy formed a part of the broader Workplace Safety and Health Council framework, which aimed at strengthening oversight and accountability on complex job sites. Alongside the VSS requirement, regulators have increased the maximum penalties for serious safety breaches from SG$20,000 (US$15,560) to SG$50,000 (US$38,900), reinforcing leadership accountability for workplace safety outcomes.

Singapore is not alone in this direction. South Korea’s AI Basic Act, implemented in January 2026, introduces governance frameworks for responsible AI deployment, while Vietnam passed Southeast Asia’s first comprehensive AI law in December 2025.

Across the region, policymakers are shifting from voluntary guidelines toward enforceable frameworks that expect organisations to demonstrate greater transparency and oversight in risk management.

Taken together, these developments point to a broader regional shift — safety technology is no longer viewed purely as operational improvement. It is becoming part of compliance architecture.

From AI cameras to building a cognitive infrastructure

Understanding why regulation is moving in this direction requires looking at what the technology itself is now capable of and how fundamentally it has changed since the first generation of site cameras.

For example, the early generation of digital safety tools focused primarily on recording incidents. Cameras integrated with AI modules captured events, logged documented violations, and reported inspections or accidents that occurred.

The modern AI-enabled systems in 2026 represent a fundamentally different model. Instead of documenting what already happened, they are designed to interpret conditions as they develop.

Computer vision algorithms can monitor scaffolding structures, detect missing guardrails, identify workers operating without harnesses, or track unsafe interactions between forklifts and pedestrians. Sensor networks connected to IoT devices can detect abnormal heat patterns, gas leaks, or environmental conditions that precede fire or chemical hazards.
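A minimal sketch of the kind of rule such a sensor network might apply, with illustrative thresholds and readings rather than any vendor's real logic: flag a reading that deviates from the recent rolling baseline by more than k standard deviations.

```python
from statistics import mean, stdev

def is_anomalous(history, reading, k=3.0):
    """Flag `reading` if it sits more than k standard deviations from baseline."""
    if len(history) < 2:
        return False                      # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return reading != mu
    return abs(reading - mu) > k * sigma

baseline = [21.0, 21.4, 20.8, 21.1, 21.3, 20.9]   # normal ambient readings (deg C)
print(is_anomalous(baseline, 21.2))   # False: within the normal band
print(is_anomalous(baseline, 48.0))   # True: an abnormal heat pattern
```

Real systems layer far more sophistication on top (seasonality, multi-sensor fusion, learned baselines), but continuous interpretation of conditions starts with exactly this kind of comparison against recent history.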

Large organisations have begun experimenting with this model. Companies such as Intel, Shell, and Komatsu have explored AI-based monitoring and predictive analytics to improve operational safety and asset reliability.

The shift we are witnessing in industrial safety right now is no longer just about experimenting with AI. It is about recognising that modern worksites generate far more risk signals than periodic human supervision can realistically manage. As regulators strengthen oversight and require greater visibility into high-risk activities, technologies capable of continuously interpreting site conditions will inevitably become part of safety infrastructure.

This point speaks to something the regulatory data already confirms: the volume and velocity of risk events on modern worksites have outpaced what traditional supervision models were designed to handle.

The limitations of mandatory safety automation

Despite its promise, AI-driven safety infrastructure is not without its challenges. As adoption grows, organisations are confronting several operational questions that remain unresolved.

One of the most frequently cited concerns is alert fatigue. When monitoring systems generate too many notifications—especially false positives—safety teams can become desensitised, potentially overlooking genuine hazards.

Data governance is another critical issue. Vision AI-based monitoring systems generate significant volumes of sensitive information about workers, site operations, and infrastructure. Ensuring that this data is stored securely and used responsibly is essential, particularly in jurisdictions with evolving data protection laws.

To address this, platforms today align with global worker privacy regulations such as the General Data Protection Regulation (GDPR) and enhance their safety modules with features like face blurring, anonymisation, and client data ownership.

These are not reasons to slow adoption — they are design challenges that organisations must build into their implementation strategy from the outset. The question for 2026 is not whether to deploy AI safety infrastructure, but how to deploy it responsibly.

Why 2026 matters in building an AI-based safety infrastructure

Several forces are converging to make 2026 a genuine inflection point for workplace safety across Asia. Regulators are introducing enforceable digital oversight frameworks. Infrastructure projects are growing in scale and complexity. And the barrier to AI adoption is falling as platforms mature and costs normalise.

At the same time, the stakeholder environment has shifted. Investors, insurers, and regulators are demanding greater transparency in operational risk management — and AI-driven monitoring systems are emerging as the clearest way to demonstrate it.

The transition will not eliminate workplace accidents overnight, and technology alone is never sufficient. But the trajectory is now clear. For organisations operating in advanced regulatory environments like Singapore, the coming years will determine not whether to integrate AI into safety infrastructure, but how effectively that integration is executed.



Good Friday crypto analysis: Is low liquidity and volume setting up a crypto crash to US$2.17T?

The crypto market’s slight 0.96 per cent retreat to a total capitalisation of US$2.3T over the last 24 hours reflects a broader narrative. Digital assets are no longer operating in isolation. They move in lockstep with traditional finance, and the current macro-driven consolidation proves this integration. The 82 per cent correlation with the S&P 500 is not a coincidence. It signals that crypto now functions as a rates-sensitive risk asset, reacting to global monetary shifts rather than internal blockchain catalysts. This reality challenges the early promise of decentralisation as an independent financial layer and presents an opportunity for those who understand how to navigate the convergence of traditional markets and digital innovation.
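For readers curious about the mechanics, a figure like that 82 per cent is typically the Pearson correlation of daily returns between the two series. The sketch below uses hypothetical return series, not real market data.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length return series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical daily returns for a crypto index and the S&P 500.
crypto = [0.012, -0.008, 0.020, -0.015, 0.005, -0.011, 0.009]
spx    = [0.006, -0.004, 0.011, -0.009, 0.002, -0.007, 0.005]

r = pearson(crypto, spx)
print(round(r, 2))   # near 1.0 for these strongly co-moving placeholder series
```

In practice analysts compute this over a rolling window (30, 60, or 90 days), so the headline number shifts as regimes change.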

Japan’s 2-year government bond yield, which climbed to a 31-year high of 1.385 per cent on April 3, 2026, triggered the latest pressure on risk assets. That move strengthened the dollar and sent ripples through equities and correlated instruments like crypto. I have long argued that monetary policy remains the dominant force shaping asset prices, and this episode reinforces that view. When global yields rise, capital rotates toward safety, and speculative assets face headwinds regardless of their technological merit. Crypto’s reaction here confirms its maturation into the global financial system, but it also highlights a vulnerability. The sector still lacks the insulation that true decentralisation could provide if regulatory frameworks embraced innovation rather than constraining it.

Altcoin weakness compounded the broader market dip. Bitcoin dominance holding at 58 per cent suggests capital remains parked in the flagship asset, and smaller tokens faced disproportionate selling. StakeStone’s STO token crashed by over 55 per cent due to large holder movements and an imminent token unlock, illustrating how sector-specific stress can amplify in low-liquidity environments. Spot volume declining 5.51 per cent means every sell order carries more weight, dragging the total market cap lower with less resistance. I have seen this pattern repeat during past consolidation phases. When liquidity dries up, volatility increases, and projects with weak fundamentals or concentrated ownership structures suffer first. This dynamic underscores why I advocate for deeper liquidity pools and more distributed token ownership as essential components of resilient Web3 infrastructure.

Also Read: While stocks rally, gold hits US$4,780 and crypto correlation tells a hidden story

The near-term technical picture offers a clear framework for what comes next. The market currently tests the 78.6 per cent Fibonacci retracement at US$2.33T, with a critical swing low at US$2.27T. A daily close below that level could open a path toward the yearly low of US$2.17T. The Fear and Greed Index, sitting at 28, labelled Fear, suggests participants feel cautious but not panicked. That sentiment aligns with a market awaiting direction rather than reacting to fresh catalysts. The SEC’s CLARITY Act roundtable on April 16 represents the next major inflexion point for regulatory sentiment. I have spent considerable time analysing how policy shapes crypto markets, and this event could provide the clarity that institutional participants need to commit capital with conviction. Until then, sideways movement between US$2.27T and US$2.33T appears the most probable path.
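As a quick aside on the mechanics, Fibonacci retracement levels such as the 78.6 per cent line are derived from a swing high and swing low. The swing values below are illustrative placeholders, not the actual chart data behind the levels cited above.

```python
def fib_levels(swing_high, swing_low):
    """Map each standard ratio to its retracement level below the swing high."""
    span = swing_high - swing_low
    return {r: round(swing_high - r * span, 4)
            for r in (0.236, 0.382, 0.5, 0.618, 0.786)}

# Placeholder swing values in US$T (hypothetical, for illustration only).
levels = fib_levels(swing_high=3.0, swing_low=2.0)
print(levels[0.786])   # 2.214, the 78.6 per cent retracement of that swing
```

The ratios themselves are fixed convention; only the swing high and low are read off the chart, which is why different analysts can quote different levels for the same market.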

Broader market context adds nuance to this crypto-specific view. US equity markets closed on April 3, 2026, for Good Friday, meaning weekly performance reflected Thursday’s close. The S&P 500 ended the week up 3.4 per cent at 6,582.69, the Nasdaq Composite gained 4.4 per cent to finish at 21,879.18, and the Dow Jones Industrial Average rose 3.0 per cent to 46,504.67. Those gains snapped a five-week losing streak, and crypto did not participate in the relief rally. This divergence warrants attention. It suggests that digital assets remain more sensitive to rate expectations than equity momentum, at least in the short term. Asian markets showed strength with Japan’s Nikkei 225 rising 1.28 per cent to 53,135 points and Hang Seng futures trending higher by roughly 0.6 per cent. The 10-year Treasury yield eased slightly to 4.31 per cent, indicating investors continue to weigh recession risks against surging energy costs.

Commodities added another layer of complexity. Brent crude settled near US$109 per barrel while WTI traded around US$111 as of late Thursday, keeping inflation expectations elevated. Gold saw renewed demand, particularly in Singapore, following a sharp earlier drop. Precious metals often serve as a barometer for risk sentiment, and their resurgence hints at underlying anxiety despite equity gains. Political developments further cloud the outlook.

Also Read: The keys to your kingdom: Navigating crypto custody in 2026

The Trump administration’s authorisation of 100 per cent tariffs on certain imported patented medicines introduces new uncertainty into global trade and pharmaceutical supply chains. Geopolitical tensions around Iran and Oman, with reports of a potential protocol to monitor shipping in the Strait of Hormuz, offered a brief hope for de-escalation but left markets monitoring every headline. Corporate news like SpaceX targeting a valuation exceeding US$2T for a potential IPO captures imagination, and such mega-listings also concentrate capital attention away from smaller, innovative projects in both traditional and digital markets.

My perspective on this consolidation phase centres on three convictions.

  • First, crypto’s correlation with traditional markets is a transitional phase, not an endpoint. As decentralised infrastructure matures and regulatory frameworks evolve, digital assets can reclaim their role as independent stores of value and mediums of exchange.
  • Second, liquidity remains the lifeblood of healthy markets. The 5.51 per cent drop in spot volume demonstrates how fragile sentiment becomes when participation wanes. Projects that prioritise deep, resilient liquidity pools will weather volatility better than those reliant on speculative momentum.
  • Third, regulatory clarity cannot come soon enough. The SEC’s April 16 roundtable on the CLARITY Act represents a critical opportunity to establish rules that foster innovation while protecting participants.

Support at US$2.27T must hold to prevent a deeper retracement toward US$2.17T. A break above US$2.33T could signal renewed confidence, especially if accompanied by rising volume and positive regulatory signals. Until then, cautious consolidation appears to be the baseline scenario. I view this period not as a setback but as a necessary phase of digestion. Markets that advance too quickly without solid foundations often correct more severely later. The current pullback allows participants to reassess fundamentals, strengthen infrastructure, and prepare for the next leg of growth. Those who focus on building rather than speculating will emerge stronger when clarity arrives.



AI and the art of team building: Lessons for startup leaders

Last week, I sent out a hiring alert. A client I work with needed someone for his startup—creative, active on social handles, good at thinking on their feet, and aware of digital marketing channels. A standard entry-level role for someone eager to learn and grow their career. No biggie!

Except there was one additional requirement: the candidate would have to be keen enough to learn AI tools and use them in daily operations. Again, not a problem, one would think? Gen Z, after all! They would have adapted to AI like our 14-year-old selves did to the internet.

And here is where it all flipped! I received two responses to the posting.

Get a vibe marketer

One suggestion was to just get a “vibe marketer.” If, like me, you’ve only recently heard the term, here’s what it means: vibe marketing is an AI-powered approach that enables one person to accomplish a lot of what ten specialists can do.

I was quite apprehensive. I’ve seen hundreds of tutorials on it and have learned two or three of those tools, using them to build posts and the odd video, but handing a startup’s entire executional think-tank to one person who supposedly knew it all seemed far-fetched.

And the reality? Well, it was! I didn’t get a single CV with vibe marketing credentials—actually, far from it.

Which brings me to my second dilemma.

Also Read: Bridging the skills gap: Tailored L&D programs for cultivating top tech talent in Asia

Recruit an AI-first talent

The other response was to get an entry-level candidate and train them on AI. What seemed great on paper turned out to be not so great in execution. Training takes time, energy, and work. If you need someone to hit the ground running, then time is a luxury.

Also, taking initiative, being super passionate to learn new things, and doing it while burning the midnight oil—these don’t make it onto the ideal work-life balance checklist. And it’s a tough gig—execute and learn on the side—not for the faint-hearted.

How do we navigate talent and build teams in the age of AI?

AI isn’t just changing how we work. It’s changing what we need to learn, who we hire, and how we build. But in a sea of endless tools and tutorials, the real challenge isn’t adopting AI; it’s anchoring it to what actually matters.

  • Anchor AI training in what matters most

“These are the best of times. These are the worst of times.”

While AI has opened up great opportunities to scale, build, and grow, it has also made it a largely overwhelming environment for talent across ages and experiences. From prompt engineering to agentic AI—there isn’t just one place to go and up-skill. And there is a new tool launched every day.

So, one way to know what to learn is to build an AI strategy that sits closest to your business, and then train your new and current workforce on the tools most relevant to that strategy, or those most likely to improve efficiency and operations. Beyond that is just frenzy and noise. Sure, websites and apps can now be made without code, but is that what you need to make now?

  • AI tools are plenty, but impact takes patience

Developing these AI skills will take time.

And the ecosystem needs patience from leaders—founders such as us. Not all tools are perfect. The free versions run out of steam quickly, and there is only so much a startup can pay for expensive AI tools, at least until their efficiency is well established.

Also Read: Future-proofing businesses and talent through technology

This reminds me of COVID-19, when we all thought online was the way consumers would live, breathe, and shop—until we saw all that euphoria die down, and water seek its own level.

AI’s trajectory will perhaps not be quite the same, because the potency of the technology is well established, but the best solutions will rise above the millions of me-toos.

  • Behind every tech stack is a human stack

Lastly, it is important to understand that irrespective of technology and where it takes us, developing and building talent is about human connection and relationships.

And that needs to be at the core of building teams and navigating the new rules of this game. No one steps into expertise without the grind of entry-level jobs. Such roles may be reshaped, but they are here to stay.

My checklist for this is going to be all about the right attitude. As my mentor advised me, skills can be learned, but attitude is everything.

In the end, hiring in the age of AI isn’t about finding the perfect resume or the most advanced prompt engineer. It’s about spotting curiosity, grit, and a willingness to learn. The tools will keep evolving, but it’s the people who are adaptable, open, and grounded who’ll build the most meaningful things with them.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.

Join us on Instagram, Facebook, X, LinkedIn, and our WA community to stay connected.

Image credit: Canva Pro

The post AI and the art of team building: Lessons for startup leaders appeared first on e27.


The heart of innovation: Why human-centric technology requires a cultural foundation

We just secured our first major client for the sustainability tech tool my startup built — validation after years in corporate leadership. Yet instead of triumph, I felt hollow.

My hands shook from exhaustion, not caffeine. This was not my corporate life; this was my reinvention after a 30-year MNC career, launched amid COVID-19 lockdowns.

That moment crystallised my turning point: ambition without humanity is a dead end.

What was at stake

When I left my global corporate role to explore entrepreneurial opportunities in Singapore and ASEAN, I underestimated the visceral shift. Startups demand ruthless prioritisation when you lack institutional resources.

My first venture, co-building an AI/ML lifecycle assessment (LCA) tool, imploded despite brilliant technical minds. We had prioritised “hi-tech, low-touch” over humanity. Egos clashed, motivations misaligned, and trust evaporated.

We mastered technology but failed at the fundamentals:

  • Transactional dynamics replacing shared purpose
  • Broken communication despite “smart” people
  • Task obsession that dissolved team cohesion

The cost? My well-being, relationships, and ultimately, the venture itself.

The mind-shift that changed everything

That failure forced brutal honesty. I realised: “Hard skills are overrated. Soft skills build businesses.”

In my next venture (a green-economy investment platform), we flipped the script:

  • Hired for ikigai, not just IQ: We prioritised collaborators who shared our purpose – not just technical virtuosos. I specifically reinforced empathy in data interpretation.
  • Designed for trust, not transactions: We instituted rituals like weekly vulnerability check-ins and co-created values. We instituted “No-Meeting Wednesdays” for deep work.
  • Measured humanity metrics: Team health (anonymous pulse surveys) became as tracked as KPIs. Burnout prevention wasn’t soft – it was strategic.

Also Read: Building a more human and engaged workforce in the age of AI

The unfinished journey

The corporate safety net is gone, but the freedom is worth it. I have learned to chase impact sustainably – protecting my mornings for family, outsourcing non-core tasks, and saying “no” to hustle theatrics.

Though I have since pivoted from the earlier failed venture, the lessons stick:

  • Tech enables, but people build. GenAI won’t fix broken trust.
  • Alignment > acceleration. A team rowing together beats solo sprinters.
  • Ambition needs humanity as its compass.

Whether you’re reinventing yourself post-corporate life or building a startup: “Don’t let ‘hard skills’ blind you to the soft infrastructure that makes teams thrive.”

Bridge to present: Human-centric tech in the Age of AI

Today, as GenAI and AI agents dominate headlines, my approach is starkly different from my early “hi-tech, low-touch” misstep. The allure of “sexy tech” has not faded, but my North Star has regained prominence: AI is a tool, not a torchbearer. It must serve human purpose, not eclipse it.

In my current work, this means:

  • Using AI to amplify—not automate—judgment: We deploy tools to handle data-crunching (like market trends or ESG metrics), freeing our team for what truly matters: interpreting insights through empathy, contextual wisdom, and ethical discernment.
  • Guarding against digital drift: We actively resist letting tools dictate pace or priorities. “Speed” isn’t king; clarity of purpose is. Every AI integration starts with: “Does this deepen human connection or dilute it?”
  • Building ethical guardrails: We co-create protocols ensuring AI enhances transparency (e.g., explaining algorithmic biases) and accountability—never replacing hard conversations or trust-building.

The critical shift? We design around human needs first. Tech follows.

Just as I learned that teams thrive on soft infrastructure, I now see: “Human-centric technology isn’t built with code, it’s built with culture.”

We use AI to remove drudgery, not humanity. To spark collaboration, not replace coffees where real trust ignites. And to extend our impact—not outsource our conscience.


Singapore aims to lead in AI — but where’s the talent?

Much like the telegraph, railway, and internet before it, AI promises to reshape economies, societies, and the very nature of work. For technologists, AI’s potential impact could dwarf its predecessors, with implications for nearly every industry and function, across manufacturing, mobility, retail, and beyond. 

One thing is certain: AI is here to stay, and the digital gold rush is in full swing. If we revisit John von Neumann’s famous 1955 essay, “Can We Survive Technology?”, we can see why adopting transformative technology at scale is mission-critical: countries that lag will be left behind — both economically and existentially.

Singapore has no intention of being left behind. 

Its ambition to carve out a leading role in the global AI race is plain to see. The city-state is on a mission to triple its AI talent pool to 15,000 practitioners by 2025, a goal anchored in the government’s updated National AI Strategy (NAIS 2.0). This runs parallel to massive investment in AI infrastructure, with more than US$1 billion being deployed into AI over the next five years.

However, while businesses are eager to ramp up AI hiring, the supply of specialists and candidates who can incorporate AI tools into their existing workflows falls woefully short of demand. This is further exacerbated for startups and high-growth digital natives who often find themselves outgunned by global big tech players. 

For these players to remain impactful, they need to reassess traditional hiring practices and their relationship with global talent, and start preparing for a dynamic regulatory ecosystem.

Plenty of investment, scarce talent, one major dilemma 

The global AI talent pool is still (relatively) meagre. While the best education systems in the world are pivoting to ensure the future workforce is equipped with the tools it needs to navigate an automated world, today’s businesses don’t have the luxury of waiting.

AI talent is also frustratingly immobile: only 2.2 per cent of AI professionals move internationally for work each year, according to Boston Consulting Group. In Singapore, this challenge is compounded because, as a city-state, it simply doesn’t have the luxury of a hundred-million-strong domestic talent pool.

So on one hand, in Southeast Asia, we see that Singapore currently leads regional AI investment, spending a whopping US$68 per capita — dwarfing Indonesia’s US$1.9 and Vietnam’s US$0.95. But on the other, we see the Ministry of Manpower sound the alarm, flagging AI as a critical shortage sector at the very same time. It has the resources but simply doesn’t have enough of the right hands to use them.

For startups, the struggle is even more acute. Established players like Google and Microsoft are aggressively absorbing talent, leaving emerging firms scrambling to compete. We need only look at Google’s 2023 acquisition spree to see this — poaching talent from AI startups backed by over US$2 billion in funding. 

So for those building teams today, making sure their hiring strategy is fit-for-purpose for the automation age is absolutely essential.

Digital transformation demands new hiring playbooks

Looking beyond conventional hiring criteria can help startups address their talent gap. 

Adopting skills-first hiring practices is a good start: instead of insisting on narrowly defined “AI experts,” companies can, for example, tap into a broader pool of engineers with foundational AI knowledge. The proliferation of modern, intuitive AI APIs makes this strategy more viable than ever.

Also Read: How Remote is pioneering global talent management and the future of work

Re-skilling and up-skilling must form part of any long-term strategy as well, but despite what some L&D consultants will tell you, it’s not a panacea for talent woes. Only 53 per cent of Singaporean talent, for example, are willing to re-skill for the AI era, lagging behind Southeast Asia’s 63 per cent.

While a greater emphasis on continuous learning is needed, every talent strategy also needs a robust international element — and like L&D, it needs reassessing.

Winning the AI talent race with cross-border teams

Singapore has long turned to its Southeast Asian neighbours to augment its talent capabilities, mostly via near-shoring and offshoring strategies. In fact, post-pandemic, 98 per cent of Singapore-based companies used outsourced teams for their IT needs.

This outsourcing model has been a cornerstone of economic growth for nations like the Philippines, where the BPO industry contributes nearly US$30 billion annually — over 10 per cent of the nation’s GDP. Traditionally, these BPOs focused on high-volume, transactional roles such as customer support, market research, and back-office operations. 

However, the region has now evolved into a hub for highly skilled, globally lauded talent, particularly in Vietnam, Malaysia, and the Philippines. 

These markets offer an often-overlooked talent pool tailor-made for high-growth digital startups. For example, 84 per cent of Malaysia’s workforce already uses AI tools daily, Filipino digital startups raised close to US$1 billion last year, and Vietnam boasts a robust 400,000-strong IT workforce rapidly adopting emerging technologies.

Instead of relying solely on these neighbours for repetitive operational tasks, startups should now tap into a high-skilled, multi-region workforce at scale.

Post-pandemic advancements in remote infrastructure and Employer of Record (EOR) platforms, like Remote and Deel, have made it easier than ever to build agile, cross-border teams. Region-agnostic hiring is now streamlined, perfect for companies with little internal red tape.

This shift allows Singaporean digital natives to fill critical AI gaps without breaking the bank, particularly in mid-level positions where the skills shortage is most acute.

Future-proofing with distributed teams and smart hiring

Embracing this mindset early may pay dividends, as AI begins to reshape these BPO markets entirely. 

Consider Superfocus, an AI-driven startup developing virtual customer service agents capable of responding to customer queries with human-like precision at a fraction of the cost. These virtual agents could soon provide experiences indistinguishable from human interaction, drastically reducing the need for human customer service professionals.

And it doesn’t stop there. Platforms like Electric Twin, launched by the former Digital Advisor to the British Prime Minister, take this transformation a step further. Simulating human behaviour in real time through “synthetic people,” the tool is built and billed as a solution for political strategists to digitise the campaign process. However, if fully realised, it could upend the entire market research industry, rendering lengthy in-person focus groups and traditional surveys obsolete.

While these advancements may trigger alarm bells for some, optimists like investor and early internet pioneer Marc Andreessen see this as a natural evolution. Decrying the “lump of labour” fallacy, he suggests AI is a net good that would simply optimise existing BPO industries. 

And with Malaysia putting digital transformation at the heart of its 2025 Budget, and Google launching an ASEAN-wide AI literacy programme, we have reason to be positive.

For Singaporean companies, this would mean SEA neighbours with increasingly capable digital talent pools, fulfilling the same needs but at breakneck speeds.

To ensure Singapore firms can navigate the coming transformation, they must not only embrace flexible global hiring strategies, but consider how to effectively use hybrid hiring models.

Agile talent strategies for a dynamic AI landscape

Many startups are already used to the “Fractional” hire model; in fact, we’ve seen fractional roles skyrocket by 53 per cent since 2020. But typically this has centred around traditional C-Suite hires.

Also Read: Striking the right balance: Financial health, talent retention, and business growth

But startups should start thinking a bit more creatively about how these types of hybrid models can address a wider array of needs. 

For example, fractional hiring could provide access to functional expertise far earlier in the growth journey than is typical. Understandably, for founders trying to make every dollar from their Series A count, a full-time hire might not seem the most judicious use of their capital.

But a fractional functional head could help them navigate what is expected to be a hugely dynamic AI regulatory landscape.

While Europe and the US debate stringent AI regulations, Singapore has taken a lighter approach. This hands-off strategy has currently borne fruit – it’s hard to imagine the aforementioned external AI investment reaching such levels without it.

But this approach won’t stay static, and the EU AI Act in and of itself is already having an impact on Singapore-based brands working with European customers. As foundation models become more ubiquitous, companies need to keep a keen eye on regional policy in particular, so they can plan effectively and adapt if needed.

The US’s and China’s technology standards, for example, continue to bifurcate, and Singapore, as a neutral hub with strong English and Mandarin proficiency, is uniquely positioned. Building in strategies for a potential technological decoupling between the world’s two largest powers and other macro events could give firms a competitive edge.

Acquiring capable talent is the key to growth

Startups and scale-ups alike need to act fast. But as Singapore-based companies explore the integration of emerging technologies into their operations, it’s important to avoid becoming overly enamoured with flashy new tools. The real value lies not just in the tools themselves but in the skill of those who wield them.

After all, a pen becomes a powerful instrument in the hands of Ernest Hemingway, but in the hands of a monkey, it’s nothing more than a stick.


Why startups fail at offshore expansion (and how to fix it)

Most startup founders assume offshore expansion is a simple cost-saving lever. Hire talent in a lower-cost market, plug them into the workflow, and get more done for less. It’s a seductive idea. On paper.

In practice, that shortcut becomes a trap.

Deadlines slip. Communication loops break. Cultural friction erodes morale. Execution quality dips. What founders expect to be a productivity multiplier turns into an operational drag. They blame the talent. They blame timezone issues. They consider outsourcing a failure.

Here’s the real cost most startups don’t factor in when offshoring. In the first one to two months, talent is onboarded but remains unproductive due to a lack of clarity. By month three, everyone is still waiting for “results,” despite never having aligned on what success actually looks like.

Come month four, the founder or early core employee steps in to “fix” things: reassigning work, micromanaging execution, and creating more noise than structure. By month five, morale has dropped, and the risk of attrition climbs sharply. 

By month six, the startup leadership team concludes that offshoring doesn’t work and begins searching for another vendor. 

But this isn’t a talent issue; it’s a leadership design flaw. No one thrives in ambiguity without scaffolding. Not your engineers. Not your ops team. Not your offshore hires. The earlier you build that scaffolding, the faster you move when it matters.

Oftentimes, the problem is never with the offshore team. It’s with the setup.

First: The real failure mode — thinking it is about labour, not systems

Startups fail at offshore expansion because they treat it like a procurement exercise instead of a systems design challenge.

Some founders hire without understanding fit: role fit, communication fit, decision velocity fit. They offload tasks without embedding context. They build parallel workflows without building feedback loops. And worst of all, they expect leverage without local leadership at the helm.

Offshoring is not about hiring “cheaper headcount.” It’s not a task-dumping zone. It’s not a way to delay building internal muscle. Offshoring only works when it is operationally integrated, when it becomes part of how the company moves, not a bolt-on to what the company already does.

The moment you view it as a Band-Aid, you’ve already compromised the outcome.

Second: Offshoring needs infrastructure, not instructions

What offshore work actually needs is infrastructure.

This does not mean a physical office or a manager on the ground. It means infrastructure in the form of clear expectations, feedback channels, shared knowledge, role ownership, and operational context.

In most failed offshore expansions, the breakdown isn’t mysterious. It is patterned. Onboarding is rushed or nonexistent. Standard operating procedures don’t exist. There’s no shared knowledge base for new hires to refer to, so everyone relies on tribal knowledge that doesn’t translate across borders.

Also Read: Human-centric skills in the age of AI: How to never lose touch with humanity in the workplace

Timelines are vague, ownership is unclear, and team members are left to guess what “done” looks like. Communication is reduced to status updates, with little context or strategic alignment. And feedback, when it happens at all, only comes after something goes wrong: when it’s too late to course-correct.

That’s not a recipe for scale. That’s a recipe for expensive rework and silent disengagement.

The truth is: startups don’t have the margin of error to “learn by breaking” when the breakage is happening in the foundation.

Third: Context is the leverage, not the cost

Founders want leverage. But the inexperienced ones rarely give teams the raw material needed to generate it: context.

Context is what allows someone to make decisions without asking. It’s what lets a team member upstream identify a downstream risk. It’s what builds alignment without micromanagement.

Most offshore workers operate with a fraction of the context their onshore counterparts have, not because they’re excluded on purpose, but because founders underestimate the cost of not transmitting it.

A 15-minute briefing call. A Loom walkthrough. A shared Notion page. A recurring stand-up. These are simple systems that expand the surface area of decisions that offshore teams can effectively own.

The companies that do this well make context a first-class function. And as a result, they move faster, not slower.

Fourth: Hiring without a system is just gambling

Startups tend to hire offshore the way they build MVPs: fast, messy, under-tested.

That can work for a product. It rarely works for a team.

You cannot drop someone into your chaos and expect them to generate clarity. You need to show them how clarity is created in your company.

Also Read: Are you a human resource?

At FullSuite, we’ve seen this pattern play out over and over. Startups that treat offshore hiring like team building and not task delegation tend to scale faster, burn less cash, and build stronger internal cultures.

They don’t just fill seats. They onboard. They coach. They embed people into the heartbeat of the business. They transfer context, not just task lists. And in doing so, they create actual leverage.

Lastly, executed right, offshoring is a powerful accelerator

The most effective offshore models are those that go beyond staffing and focus on building execution infrastructure: systems, feedback loops, and cultural touchpoints that align remote teams with the company’s core objectives. Offshore teams that are treated like second-class citizens will inevitably perform like they are. Culture isn’t something that happens in a physical office: it’s embedded through systems, rituals, and shared expectations.

If you’re a founder considering offshore expansion, the first shift you need to make is in the questions you ask. Stop asking, “How many people can I get for $X?”, “How fast can I fill these roles?”, or “What tools should they use?” These are surface-level questions focused on inputs, not outcomes. Start asking instead:

  • What does clarity look like in my company?
  • How do I transmit culture across distance?
  • What feedback loops exist between my teams?
  • Is my org structured for delegation or dependence?

The quality of your questions determines the strength of your system. And the stronger your system, the more leverage you create, regardless of where your team sits.

Offshoring done right is not about cost. It’s about capability.

It’s not about delegation. It’s about scale.

It’s not about finding people to execute what you’ve already decided. It’s about building a team that helps you decide better.

Startups that understand this treat offshore expansion not as a workaround, but as the foundation for global execution. They don’t wait until they’re “big enough” to get it right. They get it right early so they can get big without breaking.

Ask yourself: are you building an offshore team to fill gaps… or are you designing a company that works at any scale, in any location, under any condition?

The answer will define whether your startup scales with resilience. Or falls apart at the seams.


AI agents could become the new OTAs — What it means for Agoda and the future of travel

Idan Zalzberg, Chief Technology Officer at Agoda

Enterprise AI adoption has crossed the tipping point. Globally, more than 70 per cent of companies are now using AI in at least one function, while overall AI spending is projected to exceed US$2.5 trillion in 2026. What was experimental just two years ago is now operational.

But results have not kept pace. Many companies are still struggling to turn AI investment into tangible outcomes, exposing a widening gap between capability and execution. The issue is structural. AI is not just another software layer. It changes how software behaves. Traditional systems were deterministic and testable. AI systems are not. They introduce variability, ambiguity, and a new class of risk.

This is forcing companies to rethink leadership, talent, and product design at once. Decisions are increasingly made under uncertainty, engineering roles are shifting toward adaptability, and users now expect outcomes, not just tools. Travel platforms are an early test case for this transition. The category is fragmented, high-intent, and decision-heavy, making it particularly exposed to both the upside and the pressure of AI-driven change.

At Agoda, this shift is already taking shape. The company runs more than 90 generative AI use cases and has backed this with a move to a 26,000 square metre tech hub at One Bangkok, consolidating nearly 4,000 employees. As software becomes less predictable, how teams work together is becoming as critical as the technology itself.

To understand how this plays out in practice, Agoda’s CTO Idan Zalzberg outlines how the company is rethinking engineering, talent, and product in the age of AI.

When AI first hit, there was a lot of noise, fear, and conflicting opinions internally across most companies. How did you cut through that and actually align Agoda around a clear direction?

I think we are all being challenged. When the sea is stormy, the role of the captain becomes much more important. When everything is smooth and predictable, you have to ask yourself, what is my job really? Leadership ultimately comes down to making decisions, and when those decisions are shaped by uncertainty and diverging paths, they matter even more.

For us, developing an “inside-out” AI strategy early on was critical. At the time, people simply did not know what to expect. Should we go into it? Should we use it in programming? Is it good or bad? Is it going to take my job? There were many voices, and a lot of fear.

In that environment, leadership had to step in, make clear decisions, and bring confidence to people. We had to show that we have a strategy, that we have a point of view, and that no one is being left behind. A lot of people were asking what this means for them personally, and we had to give them an answer. This is how we see it. This is our mindset.

It was about making decisions, communicating them clearly, and reinforcing that message consistently to build confidence. I think this was a moment where leadership across companies really had to show up. You can also see examples where that did not happen, where leaders said one thing and then walked it back. That is very jarring, and it breaks trust. Once that confidence is lost, it is very hard to regain.

Also Read: Money on the move: The key to making dynamic travel payments simple

Has the rise of AI changed your philosophy on what makes great engineering talent, and if so, how?

What we are seeing with AI is that it introduces a fundamental difficulty. It makes software unpredictable. Traditionally, software worked in a very clear way. You define exactly what you want, you build it, and then you can verify that it behaves exactly as expected. That model is now gone.

With AI applications, you no longer know exactly how the system will behave. You cannot always guarantee that it will meet the requirements in a consistent way. Even if something works once, it does not mean it will work the same way again. That level of unpredictability is new for most engineers.

Data scientists have been used to this way of working, but now this mindset needs to extend across the entire engineering organisation. That is one of the biggest challenges we are dealing with, and we are still learning how to handle it.

This is also why starting internally was so important for us. We wanted to build that experience and help people get comfortable with the idea that software is no longer fully deterministic. AI is now embedded in many parts of the system, and whenever it is involved, you cannot assume a fixed outcome. It is not as simple as saying five plus five will always be ten. Sometimes it will say eleven.

Because of that, building the right evaluation frameworks and ensuring that you are actually improving with every iteration becomes much harder. It is something we are learning together with our teams.

It also changes what we look for in people. We need engineers who are curious, open-minded, and comfortable working in this kind of environment, because this is not something you can approach with a traditional mindset.
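The evaluation difficulty Zalzberg describes can be made concrete with a minimal sketch (a hypothetical illustration with invented names, not Agoda's actual tooling): because an AI component may answer differently on each call, a single passing test proves little, so each case is sampled several times and judged by its pass rate.

```python
import random

def evaluate(ai_fn, cases, runs=5, threshold=0.9):
    """Score a non-deterministic function by sampling each case several times.

    ai_fn     -- the callable under test (e.g. a wrapper around a model call)
    cases     -- list of (input, check) pairs, where check(output) -> bool
    runs      -- samples per case, since a single pass proves little
    threshold -- pass rate below which a case is considered unreliable
    """
    pass_rates = {}
    for inp, check in cases:
        passes = sum(check(ai_fn(inp)) for _ in range(runs))
        pass_rates[inp] = passes / runs
    reliable = sum(rate >= threshold for rate in pass_rates.values())
    return pass_rates, reliable / len(cases)

# Toy stand-in for an AI component: an adder that is right ~80% of the time,
# echoing the "five plus five... sometimes it will say eleven" example above.
def flaky_add(pair):
    a, b = pair
    return a + b if random.random() < 0.8 else a + b + 1

rates, fraction_reliable = evaluate(flaky_add, [((5, 5), lambda o: o == 10)], runs=20)
```

Tracking pass rates per iteration, rather than pass/fail, is what makes it possible to tell whether each release is actually improving.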

As AI agents become more capable, how do you see the role of OTAs evolving beyond search and booking?

I do think AI agents could become the new OTAs. What we are seeing is that customer expectations are evolving very quickly. It is no longer enough to just provide a search and booking tool. People want more autonomy, more assistance. They want something that actively helps them, not just a platform where they do everything themselves.

Also Read: Trust remains travel’s defining currency: Inside travel’s next operating model at MarketHub Asia 2026

Today, many people still see OTAs as trustworthy but relatively basic. You can search, you can book, but ultimately you are the one making all the decisions. Of course, there is already a lot of intelligence behind the scenes in how we rank and recommend options, and that has been driven by machine learning for a long time. But expectations have shifted.

Users now want more than recommendations. They want context and reasoning. They want to understand why something is being suggested. They expect the system to connect different parts of their journey. If they have just booked a flight, the hotel recommendation should take into account things like distance from the airport, arrival time, and whether early check-in might be needed. They expect a much more holistic and proactive experience.

And expectations rise very quickly once people see what is possible. I remember when AI-generated images first became popular, and people pointed out that the fingers looked wrong. But if you step back, a computer had just generated a photorealistic image from a simple prompt, something that felt like science fiction just a few years ago. Yet almost immediately, the expectation shifted to perfection.

The same dynamic is happening in our industry. As soon as users see what generative AI can do, that becomes the new baseline. They expect more, and we have to evolve to meet that.

Beyond coding, where are you starting to see AI have the most impact across the organisation?

AI is starting to show up everywhere. We have already talked about programming and development, which are clearly strong use cases. But what we are increasingly seeing now is adoption on the office side as well.

This includes tools like Excel and PowerPoint, and more broadly, work that sits between creative thinking and operational execution. Things like creating documents, reading and summarising information, building presentations, and helping people communicate more effectively. These are areas where AI is starting to have a real impact, and it is evolving quickly.

Also Read: Elevating travel experiences: The power of value-added services

On the engineering side, while core coding is already well supported, everything around it is still catching up. For example, reviewing code, debugging incidents in production, and understanding what went wrong are still emerging areas. The ability for AI to reason through issues, analyse problems, and explain what is happening is only just starting to come together.

So while some parts of the stack are already quite mature, many of these surrounding workflows are still in the early stages. That is where we are seeing a lot of new progress right now.

What does the future of travel booking look like if AI can take on a more proactive, end-to-end role for the user?

This is a question we are asking ourselves a lot, and it is more important now than ever. At the same time, it is not entirely new for us, so it does not come as a shock. In fact, it is quite exciting because generative AI is finally enabling a vision we have had for several years.

The easiest way to think about that vision is to look at what travel agents do and why people still go to them. Planning a trip end-to-end is hard and often stressful. There are many decisions to make, and every time you hit that “book” button, there is hesitation. Are the dates correct? Does everything line up? Is this the right hotel, or should I choose another one?

What we want to do is remove that stress while still keeping the user in control. Imagine having a personal travel agent who works only for you, understands your full history, your preferences, what you like and what you do not like. Instead of you doing all the work, they prepare the trip for you and guide you through it.

They might suggest options and ask, how does this look, or would you prefer something different. You can respond, adjust, and refine. Maybe this time you want a more relaxed, beach-focused trip. The system adapts instantly and reshapes the plan around that.

The goal is to create an experience where you still feel in control, but without the stress, and with a high level of trust and confidence that you are getting exactly what you want. That is where we want to go, and we believe we can get there.

Image credit: Agoda

The post AI agents could become the new OTAs — What it means for Agoda and the future of travel appeared first on e27.


Technology debt is the risk company boards keep deferring – until it becomes a crisis

Many companies are discovering, painfully, that ambitious digital strategies fail not because of weak ideas, but because of unpaid technology debt.

Legacy systems that were never integrated. Critical platforms that are poorly maintained or undocumented. Data that is fragmented, unsecured, or poorly governed.

This isn’t technical trivia. It is enterprise risk. And increasingly, it is a board responsibility.

The boardroom blind spot

Technology debt accumulates quietly. Unlike financial debt, it doesn’t appear neatly on a balance sheet.

But its impact is real:

  • Slower execution and failed transformations
  • Cyber vulnerabilities and data breaches
  • Inaccurate or unusable management information
  • Inability to deploy AI or advanced analytics responsibly
  • Regulatory and reputational exposure

When technology debt surfaces, it often does so through crisis: a breach, a system outage, or a stalled strategic initiative.

By then, the board is no longer governing. It is reacting.

Why this is a governance issue — not an IT problem

Boards often delegate technology risk to management or technology committees. That is necessary, but no longer sufficient.

Technology debt shapes:

  • Strategic optionality
  • Speed of decision-making
  • Risk interdependence across cyber, data, operations, and compliance
  • The credibility of growth and innovation plans

If the board approves a strategy without understanding the technology foundations required to execute it, it is approving aspiration — not reality.

Independent Directors, in particular, have a duty to ask: Are we building on solid digital ground or on accumulated shortcuts?

Also Read: Beyond drug discovery: How generative AI is revolutionising content creation in biotechnology

What boards must do differently

Forward-looking boards are changing their posture from passive oversight to active inquiry.

Demand visibility, not comfort

Boards should insist on:

  • A clear articulation of existing technology debt
  • Its impact on risk, cost, and strategy
  • Trade-offs between remediation and growth initiatives

If management cannot explain this simply, the board does not yet understand the risk.

Link technology debt to strategy approval

Every major strategic initiative (AI adoption, digital transformation, regional expansion) should explicitly address:

  • What technology debt must be resolved first
  • What risks arise if it isn’t
  • What “no-regret” remediation investments are required

Strategy without digital feasibility is not strategy.

Treat data governance as a board-level asset

Poor data governance is the most common and most dangerous form of technology debt.

Boards should ask:

  • Who owns data accountability?
  • How are data quality, access, and protection governed?
  • Are regulatory, privacy, and cross-border risks understood?

Without strong data foundations, AI and automation multiply risk rather than value.

Reframe technology spend as risk reduction

Boards often see technology remediation as a cost. It is more accurately described as risk insurance.

Independent Directors should challenge:

  • Underinvestment masked as “prudence”
  • Deferred upgrades justified by short-term returns
  • Innovation narratives unsupported by infrastructure reality

Ensure board capability matches exposure

Boards do not need technologists, but they do need technology fluency.

That may require:

  • Up-skilling directors
  • Appointing digitally credible independent directors
  • Changing how technology is discussed — not relegating it to dashboards

A final challenge to boards

Technology debt is the compound interest of governance avoidance.

The longer it is ignored, the more expensive and dangerous it becomes.

The question for boards is not: “Do we trust management on technology?”

It is: “Have we exercised our duty to understand the digital foundations of the business we govern?”

Because in a world where technology underpins strategy, resilience, and trust, delegating technology debt is no longer defensible governance.

This article was first published on The Boardroom Edge.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.

Enjoyed this read? Don’t miss out on the next insight. Join our WhatsApp channel for real-time drops.

Image courtesy: Canva

The post Technology debt is the risk company boards keep deferring – until it becomes a crisis appeared first on e27.


Making supply chains smarter: When precision computing meets intelligent dialogue

Global supply chains are going through a profound transformation. For a long time, logistics decisions relied heavily on experience and fragmented data. Today, two very different forms of intelligence are starting to work together, quietly reshaping everything from warehouse design and inventory planning to fulfilment and customer service.

Two partners, two kinds of smart

We believe these two forms of intelligence act as a pair of partners:

  • The Precise Calculator
  • The Intelligent Communicator

The Precise Calculator is a familiar figure in supply chain: demand forecasting models, inventory optimisation, transport optimisation, network design and so on. This is the class of tools that excels at finding patterns in huge, complex datasets and answering questions like: how much inventory should each warehouse hold to avoid both stockouts and overstock? When should we place purchase orders, and in what quantities, to balance cost and service levels?
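As one illustration of the kind of question the Precise Calculator answers, a textbook reorder-point calculation might look like the sketch below. This is a standard inventory-theory formula assuming normally distributed daily demand, not a description of any specific system mentioned in this article.

```python
from math import sqrt
from statistics import NormalDist

def reorder_point(daily_demand_mean: float, daily_demand_std: float,
                  lead_time_days: float, service_level: float) -> float:
    """Classic reorder-point model under normally distributed daily demand."""
    # Safety factor (z-score) for the target cycle service level
    z = NormalDist().inv_cdf(service_level)
    # Buffer against demand variability over the replenishment lead time
    safety_stock = z * daily_demand_std * sqrt(lead_time_days)
    # Order when inventory falls to expected lead-time demand plus the buffer
    return daily_demand_mean * lead_time_days + safety_stock

# e.g. 120 units/day on average, std dev 30, 7-day lead time, 95% service target
rop = reorder_point(120, 30, 7, 0.95)
```

Raising the service level pushes the safety factor, and therefore the reorder point, higher: the stockout-versus-overstock trade-off the article describes, expressed as a single knob.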

Precision in practice

At Cainiao, we see these capabilities increasingly becoming part of day-to-day operations. Intelligent engines are used to support demand forecasting, inventory allocation across regions, and replenishment planning. For a beverage chain, this can involve estimating how many drinks thousands of stores are likely to sell and ensuring ingredients are positioned accordingly. For an automotive company, similar approaches help align parts supply with downstream demand, so service levels are maintained while keeping working capital tied up in inventory within reasonable limits.

The impact can often be observed in areas such as improved inventory turnover, fewer stockouts, and more efficient logistics and warehousing processes. From a technology standpoint, this layer of intelligence is becoming more accessible. It typically runs on standard servers and can be deployed in cloud environments or on premises. In practice, performance depends less on specialised hardware and more on whether organisations can provide consistent, high-quality operational data and are willing to allow systems to learn from real-world outcomes over time.

Also Read: Building smart: A tech founder’s guide to the semiconductor supply chain revolution

The Intelligent Communicator: When supply chains learn to talk

The second partner, the Intelligent Communicator, is the recent wave of large language models. These systems excel at understanding natural language, synthesising information, and responding in ways humans find intuitive.

In logistics, this capability first shows up in customer service and knowledge management. In the past, when a customer raised an issue, an agent might have to copy chat logs into a spreadsheet, switch between multiple systems to check orders, inventory and billing, and then manually craft a response. Now, a large language model can read the conversation, identify the customer’s intent, call backend systems through APIs to retrieve shipment status, warehouse data and transaction records, and automatically compose a more accurate and appropriate reply. For cross‑border consumers, multilingual ability is especially valuable.
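The flow described above (classify the customer's intent, call a backend system for grounding data, then compose a reply) can be sketched in a few lines. Everything here is illustrative: the function names, fields, and the keyword-based intent stub stand in for an LLM call and real order APIs, which the article does not specify.

```python
def classify_intent(message: str) -> str:
    # In production this would be an LLM call; a keyword stub stands in here.
    return "shipment_status" if "order" in message.lower() else "other"

def fetch_shipment(order_id: str) -> dict:
    # Stand-in for a backend API call (orders, inventory, billing systems).
    return {"order_id": order_id, "status": "in transit", "eta": "2 days"}

def compose_reply(message: str, order_id: str) -> str:
    if classify_intent(message) != "shipment_status":
        return "Routing to a human agent."
    # Ground the generated answer in retrieved backend data, not model memory
    data = fetch_shipment(order_id)
    return (f"Order {data['order_id']} is {data['status']}, "
            f"arriving in about {data['eta']}.")

reply = compose_reply("Where is my order?", "A1001")
```

The key design point is the middle step: the reply is composed from retrieved records rather than generated freely, which is what makes such responses trustworthy for customers and frontline teams alike.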

At Cainiao, we have been exploring AI-enabled customer service applications built on large language models. While this Intelligent Communicator typically requires stronger computing resources, the more important factor in practice is how well it is integrated with domain knowledge and operational workflows. The usefulness of such systems depends not only on fluency but also on whether responses are grounded in a real business context and can be trusted by both customers and frontline teams.

When the two partners start working together

The real turning point comes when these two forms of intelligence stop operating in isolation and start amplifying each other.

The first step is to let the Precise Calculator teach the Intelligent Communicator, using years of high‑quality operational data, so the latter doesn’t just chat—it actually understands supply chain logic. The second step is to bring the Intelligent Communicator into the decision loop, so it’s not just answering questions but helping structure decisions, explain trade‑offs, and surface cause‑and‑effect in the business.

Also Read: Why supply chain AI works in the lab but fails in the real world

From copilot to autonomous agent

Long-term, the goal is to build intelligent agents with a degree of autonomy at key points in the operation. Imagine a scenario like Double 11 or Black Friday: instead of manually coordinating dozens of teams, a supply chain leader interacts with a single interface and sets an objective such as: “Ensure on‑time delivery in our core North America and Europe markets stays above 96 per cent, while reducing overall inventory risk by 10 per cent.”

The system then breaks this goal down into concrete tasks, calls on demand forecasting, capacity assessment, network optimisation and in‑warehouse simulation modules, and takes into account the capabilities of automated warehouses and overseas hubs. The output is a complete operating plan: how to rebalance inventory across different overseas warehouses, which SKUs’ service commitments should be dynamically adjusted, when and where to activate additional automation capacity, and so on.
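A heavily simplified orchestration sketch of that decomposition is below. The module names mirror the ones mentioned in the text, but their implementations are placeholder stubs; a real agent would invoke optimisation and simulation engines, not format strings.

```python
# Goal expressed by the supply chain leader: on-time target and risk reduction
GOAL = {"on_time_target": 0.96, "inventory_risk_reduction": 0.10}

def run_module(name: str, goal: dict) -> str:
    # Each module would call a real optimisation/simulation engine in practice.
    return f"{name}: plan segment computed for on-time >= {goal['on_time_target']:.0%}"

def build_operating_plan(goal: dict) -> list[str]:
    # The agent decomposes the goal into tasks and fans out to its modules
    modules = ["demand_forecasting", "capacity_assessment",
               "network_optimisation", "in_warehouse_simulation"]
    return [run_module(m, goal) for m in modules]

plan = build_operating_plan(GOAL)
```

The shape matters more than the stubs: a single objective fans out into module-level tasks, and their outputs are reassembled into one operating plan the human can review.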

Building the future, one step at a time

Within our global network, we are already seeing early versions of this evolution. From planning our five‑day global delivery service to coordinating overseas warehouse networks and automation assets, the Precise Calculator is embedded in day‑to‑day operations. At the same time, more natural, intelligent conversational interfaces are being rolled out, allowing teams in different countries and functions to simply talk to the supply chain instead of clicking through endless dashboards.

The journey from basic digitalisation to true intelligence will not happen overnight. It is built step by step. But the direction is already clear. For brands and supply chains accelerating their globalisation, the fusion of precise computation and intelligent dialogue will be a critical pillar of future competitiveness.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.

Join us on WhatsApp, Instagram, Facebook, X, and LinkedIn to stay connected.

The post Making supply chains smarter: When precision computing meets intelligent dialogue appeared first on e27.