
The US$80K Bitcoin wall: What happens next could define the next quarter

Bitcoin emerged as a standout performer in the renewed risk-on environment, climbing 2.75 per cent to US$78,402.80 over 24 hours. This move outpaced the general rise in equities while remaining tightly coupled to the macro sentiment driving traditional markets.

The primary catalyst for this widespread optimism was US President Donald Trump’s announcement of an indefinite extension of the US-Iran ceasefire. This development effectively removed the immediate threat of conflict near the Strait of Hormuz, allowing investors to rotate back into riskier assets with renewed confidence. The relief was palpable across asset classes, validating the thesis that Bitcoin currently acts as a high-beta proxy for global liquidity and risk appetite.

The correlation between digital assets and traditional equities was on full display in the latest trading session. Data indicates a 95 per cent correlation between Bitcoin and the S&P 500 over the last 30 days, suggesting that both markets are reacting to the same macroeconomic drivers.
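For readers curious how such a figure is produced, the sketch below computes a Pearson correlation on 30 days of daily returns. The price series here are synthetic stand-ins chosen to move together; they are not the data behind the cited statistic.

```python
from math import sqrt

def daily_returns(prices):
    """Percentage change between consecutive closes."""
    return [(b - a) / a for a, b in zip(prices, prices[1:])]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative only: two synthetic 31-day price paths that share
# the same up-and-down pattern, so their returns correlate strongly.
btc = [76_000 + 60 * d + (250 if d % 5 == 0 else -90) for d in range(31)]
spx = [7_000 + 4 * d + (18 if d % 5 == 0 else -6) for d in range(31)]

corr = pearson(daily_returns(btc), daily_returns(spx))
print(f"30-day correlation of daily returns: {corr:.2f}")
```

Note that correlation is conventionally measured on returns rather than on raw prices, since two trending price series can look correlated even when their day-to-day moves are unrelated.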

As the geopolitical fog lifted, major US stock indices surged to record-high finishes. The S&P 500 rose 1.05 per cent to settle at a fresh all-time high of 7,137.90, completely erasing losses stemming from recent conflict fears. The technology-heavy Nasdaq Composite advanced even further, gaining 1.64 per cent to close at a record 24,657.57. This performance was buoyed by a remarkable 16-day winning streak for chipmakers, highlighting the resilience of the technology sector.

Even the more industrial-focused Dow Jones Industrial Average participated in the rally, adding 340.65 points, or 0.69 per cent, to finish at 49,490.03. The Russell 2000 also joined the festivities, gaining 0.74 per cent to close at 2,785.38, indicating that the bullish sentiment was broad-based and not limited to just the largest-cap stocks.

Bitcoin’s rally was not merely a passive reflection of stock market gains but was amplified by specific dynamics within the cryptocurrency market structure. A significant short squeeze played a crucial role in accelerating the price action. As the price began to climb following the ceasefire news, leveraged bearish positions were forced to close rapidly.

Data reveals that US$198.67M in Bitcoin positions were liquidated over the 24-hour period, with shorts accounting for US$187.33M of that total. This cascade of forced buying created a reflexive loop that pushed prices higher than organic demand alone would have.

The persistently negative funding rate suggests that bearish leverage remains in the system, which could fuel further squeezes if the upward momentum continues. This mechanical aspect of the rally underscores the volatility inherent in the current market phase, where sentiment can shift sharply due to leverage flushes.

Underpinning this technical move was a robust fundamental narrative driven by institutional accumulation. Despite the short-term volatility, long-term demand remains strong. US spot Bitcoin ETFs continued to see strong inflows, signalling that institutional investors are using these dips to add exposure.

Furthermore, corporate buying remains a powerful force, exemplified by Strategy purchasing 34,164 BTC for US$2.54B. This level of corporate accumulation validates the ongoing narrative that Bitcoin is being treated as a treasury reserve asset by forward-thinking companies.

The combination of macro risk-off events ending and this steady institutional bid provides a solid floor for the asset, even as it approaches significant resistance levels. The market is essentially pricing in a scenario where geopolitical stability allows capital to flow freely back into scarce, high-growth assets.

Also Read: Bybit invests US$8M in Hata to crack Malaysia’s regulated crypto market

The equity rally was further supported by a wave of robust corporate earnings that largely outperformed analyst expectations, adding fuel to the fire. Boeing saw its shares surge 5.5 per cent after reporting a smaller-than-expected first-quarter loss and providing healthy delivery projections, a sign that the aerospace giant is stabilising. GE Vernova jumped nearly 14 per cent after beating revenue expectations, underscoring strength in the energy sector.

Tesla also contributed to the positive sentiment, gaining in after-hours trading after beating earnings estimates, although shares later slipped as CEO Elon Musk cautioned about rising capital expenditures. The so-called Magnificent Seven tech names were instrumental in supporting the Nasdaq’s record run, with Apple rising 2.6 per cent and Amazon gaining 2.1 per cent.

Microsoft also played a significant role in the index’s advancement. This breadth of earnings strength suggests that the corporate sector is navigating the current economic environment better than many sceptics had anticipated.

Commodities markets also reflected the shifting geopolitical landscape, albeit with some lingering caution. Brent crude oil climbed over three per cent to settle near US$102 per barrel, marking its first close above US$100 since early April.

This rise was driven by lingering supply uncertainty in the Strait of Hormuz, reminding investors that while the immediate threat of war has receded, the structural risks to energy supply chains remain. Copper prices also jumped nearly two per cent to reach a three-month high of US$6.18 per pound, indicating strong demand expectations for industrial metals.

In the Asia-Pacific region, markets in Japan, Hong Kong, and South Korea opened higher on Thursday, following the strong lead from Wall Street. This global synchronisation confirms that the risk-on sentiment is not isolated to the United States but is a worldwide phenomenon driven by the hope of stabilised international relations.

Also Read: Bitcoin at US$75,872: Why the next 72 hours will determine if this rally has legs

Looking at the technical landscape for Bitcoin, the asset now faces a critical juncture. The rapid ascent has brought price action directly into a high-conviction resistance zone between US$78,000 and US$80,000, where a major sell wall exists. Traders are closely watching the US$77,160 level, which represents the 50 per cent Fibonacci retracement level and serves as immediate support.

Below that, a massive US$217M bid wall sits at US$75,700, providing a substantial cushion against deeper corrections. The 20-day EMA at US$77,907 is also acting as dynamic support. If buying pressure is sustained and Bitcoin closes above the US$80,000 resistance, the path opens for a test of the 127.2 per cent extension near US$80,723.

Conversely, a break below the US$75,700 support level would invalidate the immediate bullish thesis and risk a pullback toward US$72,000.
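For context on how such levels are typically derived, the sketch below computes standard Fibonacci retracement and extension prices from a swing low and swing high. The swing values used here are back-solved illustrations chosen so the 50 per cent level lands on the cited US$77,160; they are not the article's actual chart anchors.

```python
def fib_levels(swing_low, swing_high):
    """Retracement and extension prices for an upward swing."""
    rng = swing_high - swing_low
    return {
        "50.0% retracement": swing_high - 0.500 * rng,
        "61.8% retracement": swing_high - 0.618 * rng,
        "127.2% extension": swing_low + 1.272 * rng,
    }

# Hypothetical swing points, chosen only so the 50 per cent level
# matches the US$77,160 support cited above.
levels = fib_levels(74_852.5, 79_467.5)
for name, price in levels.items():
    print(f"{name}: US${price:,.0f}")
```

Retracements are measured back down from the swing high; the 127.2 per cent extension projects the full swing beyond it, which is why it sits above the recent high as an upside target.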

The market outlook remains decidedly bullish, driven by the confluence of a positive macro catalyst and reflexive market mechanics. The indefinite extension of the ceasefire has provided the breathing room necessary for risk assets to recover, and strong institutional demand ensures that real money supports these higher prices.

The battle between the sell wall at US$80,000 and the bid wall at US$75,700 will likely determine the next directional move within the next 24 to 48 hours. Investors should watch for a decisive break and close above US$80,000 on high volume to confirm continuation.

Until then, the market remains in a state of high tension, balancing the optimism of de-escalation against the technical realities of overextended short-term moves. The correlation with the S&P 500 suggests that as long as equities hold their record highs, Bitcoin has a strong tailwind to challenge its own resistance levels.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.

Join us on WhatsApp, Instagram, Facebook, X, and LinkedIn to stay connected.

The post The US$80K Bitcoin wall: What happens next could define the next quarter appeared first on e27.


Report: AI agents face reliability ceiling as organisations embrace multi-model strategies

The rapid proliferation of AI agents across enterprise environments is reshaping how organisations build and operate software, according to Datadog’s State of AI Engineering 2026 report. Based on telemetry data drawn from thousands of organisations running AI in production, the findings paint a picture of an industry accelerating into complexity—and beginning to encounter the operational limits that come with it.

Two findings stand out. First, the shift toward multi-model strategies is no longer a niche approach; it has become standard practice. Second, AI agents running in production are hitting a hard capacity ceiling, with rate limit errors emerging as the single most common cause of failure.

A multi-model world takes shape

A year ago, OpenAI commanded a 75 per cent share of enterprise LLM usage among Datadog customers. That figure has since fallen to 63 per cent: not because OpenAI lost ground in absolute terms, but because the broader market expanded rapidly around it. The number of Datadog customers using OpenAI more than doubled over the same period, even as Google Gemini and Anthropic Claude gained 20 and 23 percentage points of market share, respectively.

The more telling shift is happening inside organisations themselves. More than 70 per cent now deploy three or more models, and the proportion using more than six models nearly doubled year-on-year. Rather than selecting a single default provider, engineering teams are assembling model portfolios. They are matching lightweight models to extraction and tagging tasks and reserving frontier models for synthesis and reasoning.

This approach offers genuine advantages. Teams can optimise for cost, latency, and output quality at each stage of a workflow. But it introduces significant operational overhead. Coordinating API calls across disparate providers makes it harder to enforce safety and compliance standards consistently and leaves systems more vulnerable when any single provider throttles requests or degrades in performance. The report recommends that teams adopt modular routing mechanisms—such as a gateway service—rather than rely on direct provider API calls scattered across their environments.
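As a rough illustration of the gateway pattern the report recommends, the sketch below routes each task type to an ordered list of models and falls back to the next one when a provider throttles or fails. The model names and provider callables are hypothetical; a real gateway would wrap actual provider SDKs.

```python
class ModelGateway:
    """Minimal sketch of a routing gateway: callers name a task,
    the gateway picks a model and falls back on provider errors."""

    def __init__(self, routes, fallback):
        self.routes = routes        # task -> ordered list of model ids
        self.fallback = fallback    # model id used when a route is missing

    def call(self, task, prompt, providers):
        """providers: model id -> callable(prompt) that may raise."""
        for model in self.routes.get(task, [self.fallback]):
            try:
                return model, providers[model](prompt)
            except Exception:
                continue  # provider throttled or down: try the next model
        raise RuntimeError(f"all models failed for task {task!r}")

# Hypothetical routing: a lightweight model for extraction and tagging,
# a frontier model for synthesis, as described above.
gateway = ModelGateway(
    routes={"extract": ["small-model", "frontier-model"],
            "synthesise": ["frontier-model"]},
    fallback="frontier-model",
)

providers = {
    "small-model": lambda p: f"tags({p})",
    "frontier-model": lambda p: f"essay({p})",
}
model, out = gateway.call("extract", "invoice #42", providers)
print(model, out)  # small-model tags(invoice #42)
```

Centralising provider calls this way gives one place to enforce safety and compliance policy, and one place to reroute traffic when a single provider degrades.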

Also Read: From fragmentation to shared futures: Re-wiring global digital cooperation from an Asian frontline

The compounding nature of this challenge is also reflected in how organisations manage model versions. Teams are quick to test new releases but slow to retire older models already running in production. Each additional model in the fleet increases evaluation burden and operational risk, a form of AI-specific technical debt that accumulates quietly until it becomes difficult to unwind.

AI agents stall at the capacity ceiling

The second major finding concerns how reliably AI agents perform once deployed. Datadog’s analysis of LLM call failures in customer traces reveals that in February 2026, five per cent of all LLM call spans reported an error, with 60 per cent of those errors caused by exceeded rate limits. The following month, the overall error rate fell to two per cent, but rate limit errors still accounted for nearly a third of failures, totalling approximately 8.4 million incidents in March alone.

The implication is significant. As AI agents take on more complex, multi-step workflows (orchestrating tool calls, chaining model requests, and operating with greater autonomy), they are running up against the throughput limits of model providers. Reliability at scale is becoming a function not just of code quality or prompt engineering, but of infrastructure capacity.

Datadog’s report recommends a combination of operational patterns, including request budgeting and backpressure systems, alongside prompt-level optimisations to reduce unnecessary token consumption.
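A minimal sketch of the request-budgeting and backpressure idea might look like the following: a client-side token bucket refuses work locally before the provider has to reject it, and an exponential-backoff wrapper retries when the budget is exhausted. The RuntimeError here is a stand-in for an HTTP 429; this is an illustrative pattern, not Datadog's implementation.

```python
import time

class TokenBucket:
    """Client-side request budget: shed load locally (backpressure)
    instead of letting the provider reject it with a rate-limit error."""

    def __init__(self, rate_per_sec, burst):
        self.rate, self.capacity = rate_per_sec, burst
        self.tokens, self.last = burst, time.monotonic()

    def try_acquire(self, cost=1.0):
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

def call_with_backoff(fn, bucket, max_retries=5, base_delay=0.01):
    """Retry with exponential backoff when the local budget is exhausted
    or the provider signals a rate limit."""
    for attempt in range(max_retries):
        if bucket.try_acquire():
            try:
                return fn()
            except RuntimeError:      # stand-in for an HTTP 429 response
                pass
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("rate limit: retries exhausted")

bucket = TokenBucket(rate_per_sec=100, burst=2)
results = [call_with_backoff(lambda: "ok", bucket) for _ in range(5)]
print(results)
```

The same pattern pairs naturally with the report's prompt-level advice: fewer, smaller requests consume the budget more slowly, so backoff triggers less often.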

“AI is starting to look a lot like the early days of cloud,” said Yanbing Li, Chief Product Officer at Datadog.

The parallel is instructive. Cloud computing unlocked enormous capability but demanded an entirely new discipline of operational management. AI agents appear to be following the same trajectory, and organisations that invest in observability and reliability infrastructure now may find themselves considerably better positioned as the technology continues to mature.

Image Credit: Igor Omilaev on Unsplash



SEA’s fintech boom: Market demand is real, but the numbers need context

Southeast Asia (SEA) has emerged as Asia’s most fintech-dense subregion, according to a new study by UnaFinancial, an international fintech group headquartered in Singapore. The research maps fintech concentration across 19 economies using a per capita metric, arriving at a weighted average of 14 companies per million people for the subregion.

On the surface, it is an impressive figure. Look closer, however, and the story becomes more nuanced.

The density figures are not simply an artefact of investor enthusiasm or regulatory permissiveness. They reflect something more fundamental: a large and underserved population that traditional banking has consistently failed to reach. Across markets including Indonesia, the Philippines, and Vietnam, significant portions of the adult population remain unbanked or underbanked, relying on informal financial systems for payments, credit, and savings.

Fintech companies operating on mobile-first platforms and alternative credit-scoring models have moved into that gap at considerable speed. The proliferation of digital wallets, buy-now-pay-later (BNPL) services, and peer-to-peer (P2P) lending platforms across the region speaks to genuine consumer demand rather than supply chasing a non-existent market.

Where traditional banks required branch infrastructure, credit histories, and formal employment records, fintech operators have found ways to serve customers who lack them.

Also Read: The US$80K Bitcoin wall: What happens next could define the next quarter

This dynamic matters because it distinguishes SEA from fintech markets where density is primarily a function of regulatory arbitrage or institutional capital seeking returns. The underlying demand in this region is structural, tied to demographic scale, rising smartphone penetration, and decades of underinvestment in conventional financial infrastructure. That foundation gives the ecosystem a degree of durability that pure capital-driven booms typically lack.

One city is doing a lot of heavy lifting

It is important to note that a substantial portion of the headline figure is driven by a single market: Singapore, which registers a density of 619 companies per million, by far the highest of any economy in the study.

Singapore’s position is the product of specific and largely unreplicable conditions. As a city-state with a sophisticated regulatory environment, deep capital markets, and a long-standing policy of attracting international financial services firms, it functions more as a regional headquarters hub than as a representative SEA market. Many of the fintech companies counted in its figures are operationally focused elsewhere in the region or globally, using Singapore primarily as a base for licensing, fundraising, and corporate structuring.

Strip Singapore out of the subregional calculation, and the weighted average would fall considerably. The remaining markets—each contending with fragmented digital infrastructure, varying regulatory maturity, and populations spread across thousands of islands and rural provinces—present a more modest picture.
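To see how heavily one dense city-state can skew a population-weighted average, consider the back-of-the-envelope calculation below. The population figures and the ex-Singapore density are illustrative assumptions chosen to be consistent with the cited 14-per-million regional average; they are not UnaFinancial's underlying data.

```python
# Hypothetical figures: populations in millions, densities in
# fintech companies per million people.
markets = {
    "Singapore":   {"pop_m": 6.0,   "density": 619.0},
    "Rest of SEA": {"pop_m": 674.0, "density": 8.6},
}

def weighted_density(ms):
    """Population-weighted companies-per-million across markets."""
    companies = sum(m["pop_m"] * m["density"] for m in ms.values())
    population = sum(m["pop_m"] for m in ms.values())
    return companies / population

all_sea = weighted_density(markets)
ex_sg = markets["Rest of SEA"]["density"]
print(f"With Singapore: {all_sea:.1f} per million")
print(f"Without Singapore: {ex_sg:.1f} per million")
```

Under these assumptions, roughly 6 million residents contribute more than a third of the region's counted fintech firms, which is why removing Singapore drags the average from about 14 down to single digits.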

Also Read: Nium bets on a future where stablecoins swipe like credit cards

Treating Singapore’s density as indicative of broader regional progress risks overstating how far the ecosystem has actually developed in the markets where most SEA residents live.

Apart from that, a high company count per capita says nothing about whether these companies are financially sustainable, adequately regulated, or genuinely serving their stated customer base. Fintech markets that expanded rapidly during the low-interest-rate environment of the early 2020s are now under pressure, with funding harder to secure and profitability timelines under greater scrutiny.

The consumer demand underpinning SEA’s fintech growth matters. But demand alone does not guarantee that the companies formed to meet it will survive long enough to deliver on their promise. As the sector matures, the more meaningful measure of progress will not be how many fintech firms exist per million people. It will be how many of them are still serving those people a decade from now.



The foundation of Southeast Asia’s tech future

In the global technology landscape, the conversation around artificial intelligence is often dominated by the race for ever-larger models and the dazzling capabilities of generative applications. For many, AI is a feature—a new button to press, a smarter chatbot, an enhanced recommendation engine.

However, for the dynamic and rapidly digitising economies of Southeast Asia, this perspective is not just limiting; it is a fundamental miscalculation. To unlock the projected US$1 trillion in regional GDP uplift by 2030, the region’s startups, enterprises, and policymakers must embrace a more profound paradigm: AI as core infrastructure.

This is not merely a semantic distinction. Treating AI as a feature means bolting it onto existing systems, a superficial enhancement to legacy processes. Treating it as infrastructure means building the entire enterprise on a new foundation, reimagining workflows, business models, and value creation from the ground up.

For Southeast Asia, a region defined by its vibrant complexity, this infrastructural approach is not just an opportunity—it is a necessity.

The complexity advantage: A launchpad for global-ready AI

What makes Southeast Asia the ideal launchpad for the application layer of AI is the very fragmentation often cited as a business challenge. The region’s diversity across languages, cultures, and regulatory frameworks acts as a powerful forcing function, compelling founders to design for scale and adaptability from day one. This environment makes it nearly impossible to succeed with narrow, single-market solutions, inadvertently creating a generation of startups building inherently global-ready AI.

Several real-world problems unique to the region are proving to be fertile ground for this new breed of AI infrastructure companies.

“Being based in Asia is for us a very good starting point because most of the world’s business processes are actually outsourced to Asia in general. So we’re using that base as a foundation for building a global company.” — Christian Schneider, CEO, fileAI

This proximity to complex, real-world workflows provides an unparalleled advantage. While Western counterparts may theorise about enterprise automation, Southeast Asian startups are building it at the source, creating horizontal platforms capable of navigating the intricate realities of global business process outsourcing (BPO), cross-border compliance, and hyper-localised customer engagement.

Also Read: How are the companies you invest in leveraging AI? 

From AI-first to AI-native: A foundational shift

The most forward-thinking companies in the region are already moving beyond simply being “AI-first.” A recent study found that 29 per cent of businesses across ASEAN have now adopted AI, a significant increase from 21 per cent the previous year, marking 38 per cent year-over-year growth. More importantly, a strategic shift is underway from merely experimenting with AI to fundamentally re-architecting operations to be “AI-native.”

This transition requires avoiding what Carro’s COO, Zi Yong Chua, warns against: building “AI for AI’s sake.” Instead, it demands a focus on tangible business value and an enterprise-ready foundation built on precision, preparation, and people. It means focusing on narrow, high-value use cases that deliver immediate ROI, doing the hard groundwork of data preparation, and investing in talent. This shift is evident in the rise of indigenous and sovereign Large Language Models (LLMs), such as Thailand’s open-source Typhoon model, which are being developed to support local languages and reduce reliance on foreign tech stacks.

The physical infrastructure paradox

The concept of AI as infrastructure is not just a metaphor; it is a physical reality. The exponential growth in AI adoption is colliding with the hard constraints of energy and data centre capacity. A single rack of AI servers can consume 40–60 kW of power, a tenfold increase over traditional cloud computing racks. This has created an infrastructure paradox in the region.

Singapore, long the undisputed data hub of Asia, is running out of power. With data centres already consuming nearly seven per cent of the nation’s electricity, a moratorium was placed on new construction, only recently lifted for operators meeting the strictest sustainability standards. This has pushed demand across the border to Johor, Malaysia, which has rapidly become the region’s new hyperscale frontier, with abundant land and power to support the massive, liquid-cooled data centres required for AI workloads.

This Singapore-Johor corridor is a prime example of how physical infrastructure is shaping the future of AI, creating a cross-border digital ecosystem where data-intensive training and latency-sensitive inference are run in different sovereign territories.

Also Read: AI, seed-strapping, and the new playbook: Why customers are the best VCs

The future is horizontal

As the region’s AI maturity grows, the strategic imperative is shifting from siloed, vertical solutions to powerful horizontal platforms. The most valuable AI companies will not be those that solve one problem well, but those that provide the foundational building blocks for others to innovate upon. This approach, championed by companies like fileAI, focuses on creating proprietary AI components that allow users to construct and automate a multitude of complex workflows.

This platform-based model is the essence of AI as infrastructure. It democratises access to powerful capabilities, enabling a broader ecosystem of businesses to become AI-native without each having to build its own core models from scratch. It is a strategy that recognises that the true value of AI lies not in a single application, but in its ability to become a pervasive, foundational layer of the new digital economy.

For Southeast Asia, the path forward is clear. The startups, corporations, and governments that recognise and invest in AI as fundamental infrastructure—both digital and physical—will be the architects of the region’s future. The trillion-dollar opportunity is not in building more features, but in laying the rails for a new era of innovation.


Image courtesy: Canva



The great stabilisation: Why 2026 will be the year AI “grows up”

We have spent the last three years in a storm of hype. Every week brought a new model that promised to change the world; every month, companies scrambled to integrate whatever appeared to be the “next big thing.” But as we look toward 2026, the wind is changing. We are moving from the era of building the basics of AI to the era of living with it.

The conversation has moved away from how impressive the technology looks in a demo. What matters now is whether it delivers consistent, measurable value to a real human being. Here is my view on the seven major trends that will define our lives in 2026.

Software is no longer the “moat”, data is

For decades, building complex software was like building a castle. If you had the best code, you had the highest walls, and no one could touch you. That era is essentially over. In 2026, writing software will be trivial. AI can write production-ready code instantly. The “Moat” (your defensive business advantage) is no longer the app itself—it is the data inside it.

Imagine two companies launch a tennis coaching app. One has slightly better software; the other has 10 years of proprietary data on how professional athletes serve. In 2026, the second company wins instantly. Data, not software, is the new foundation of advantage.

AI moves off the screen, and into the world

AI is breaking free from the confines of the screen. We are entering an era of “presence-based” hardware: devices designed not just to respond, but to exist alongside us in specific environments. We are starting to see specialised AI hardware. Think of a small desk device that acts specifically as a “Doctor’s Assistant,” listening to patient symptoms and drafting notes securely.

By 2026, we will see them begin to converge into a new category of consumer hardware, something that might eventually challenge the smartphone itself. The new generation of devices will not simply compute on demand; they will be ambient, contextual, and present.

Also Read: Bridging the last mile: How AI can transform agriculture, health, and education in SEA

Small is the new big (SLMs)

For a long time, the race was to build the biggest “Brain” possible (Large Language Models). This is giving way to a more pragmatic approach.

Giant, general-purpose systems are powerful, but they are also expensive, slow, and difficult to control. The future belongs to smaller, specialised models trained to do one job exceptionally well. For instance, a bakery does not need AI that understands geopolitics. It needs a model that understands inventory, suppliers, and recipes. Small Language Models make AI systems easier to debug, easier to trust, and easier to compose, allowing multiple focused intelligences to work together.

The “agentic” factory

The way we build products is being redesigned from the ground up. The traditional development cycle of humans designing, coding and testing has already begun to erode. By 2026, teams will increasingly operate through fully agentic workflows.

Humans will define objectives and constraints. AI agents will design interfaces, write code, and attempt to break the system through automated testing. The human becomes the Architect, not the bricklayer. This will make software development faster and cheaper than we ever imagined.

Video becomes precise and controllable

Until now, AI-generated video has been impressive but unreliable. Small changes often produced unintended distortions, limiting serious adoption. In 2026, that changes. Advances in model precision are enabling object-level control within moving video. Creators will be able to modify a single element—such as the colour of a car—without affecting the rest of the scene. Video generation moves from novelty to utility, becoming a precise, surgical tool rather than an unpredictable experiment.

Also Read: The agritech challenge in Indonesia: Can AI and mobile apps enhance productivity?

Fighting the “slop”

The internet is flooding with AI-generated “slop”—low-quality, spammy content that feels like junk food for your brain. Social platforms are finally taking the gloves off. Expect aggressive new measures to filter out this low-effort noise. We will see a premium placed on human-verified reality. “Verified Human” might become the most valuable badge on the internet this year.

Protecting our minds

Perhaps the most sensitive frontier is psychological rather than technical. As AI companions become more conversational, empathetic, and available, they can also become more addictive. Imagine an AI friend that knows exactly what you want to hear, 24/7. It is incredibly validating, but potentially manipulative.

2026 will be the year of regulation and ethical design. We will see features that prevent AI companions from becoming “digital sugar”—addictive and unhealthy. Just as we have warnings on physical products, we might start seeing “dependency warnings” on hyper-realistic AI chat apps. The goal will not be to eliminate companionship, but to ensure it remains healthy.

The verdict

2026 isn’t about AI becoming “smarter”. It is about AI becoming reliable, specific, and safe. It means we stop obsessing over the technology itself and start focusing on what really matters: human potential.

For business leaders, the takeaway is simple. Stop asking, “How can we use AI?” Instead, start asking, “What unique data do we own that no AI can replicate?” In a stabilised AI world, data, not the technology itself, will be the castle that matters for the next decade.


Image courtesy: Canva
