What NVIDIA GTC 2026 reveals about the future of embodied AI

Artificial intelligence is stepping off the screen.

At NVIDIA GTC 2026, one theme stood out across announcements, demos, and developer activity: AI is moving beyond cloud-based software into robots, edge devices, and real-world environments.

This shift is driving the rise of embodied AI. And increasingly, these systems are being designed with one interface in mind: voice.

From models to machines

For the past few years, AI innovation has largely centred on models, with each generation delivering stronger reasoning and multimodal capabilities. But GTC 2026 signals a transition from models to machines.

Instead of asking what AI can generate, developers are now exploring what AI can do in real-world environments. This includes AI systems embedded directly into physical devices and environments.

This shift is enabled by the convergence of several layers:

  • Edge AI computing platforms like NVIDIA Jetson
  • Multimodal models capable of processing vision, audio, and text
  • Real-time infrastructure for interaction and response
  • Accessible hardware platforms for rapid prototyping

Together, these layers are turning AI from a passive tool into an active system. This shift is not just theoretical. It is already shaping how developers build and experiment with AI systems today.

The rise of developer-first robotics

One of the most notable ways this shift is materialising is through the emergence of developer-first robotics platforms.

These systems are not built solely for industrial deployment. Instead, they are designed to be programmable and modular, allowing developers to prototype embodied AI applications more easily.

NVIDIA’s Isaac platform continues to play a central role here, offering simulation and development tools that allow teams to train and test robotics systems before deploying them in the real world. Jetson-powered kits are also becoming a standard foundation for edge AI and robotics experimentation.

Alongside these, newer platforms are lowering the barrier to entry even further. Reachy Mini, an open-source humanoid robot developed by Pollen Robotics in collaboration with Hugging Face and integrated with Seeed Studio’s hardware ecosystem, is one such platform gaining attention.

Unlike traditional robotics systems, Reachy Mini is designed for interaction. It combines expressive movement, modular hardware, and compatibility with modern AI models, making it easier for developers to build embodied AI agents that can engage with humans.

Why Reachy Mini stands out

What makes Reachy Mini particularly relevant in the current wave of embodied AI is its focus on real-time, human-like interaction.

While many robotics platforms are still centred on automation or industrial tasks, Reachy Mini is designed for developers building interactive AI systems. This distinction has made it increasingly visible across GTC 2026 and its surrounding ecosystem events, where it was also highlighted during NVIDIA CEO Jensen Huang’s keynote.

Developers are using Reachy Mini alongside:

  • NVIDIA Jetson Orin Nano for edge AI computing
  • Multimodal models from platforms like Hugging Face
  • Speech and voice technologies for natural interaction

This combination enables a new class of applications where robots are not just executing predefined workflows, but continuously engaging with users in real time.

Instead of fixed tasks, these systems can:

  • Understand spoken input and intent
  • Process context using multimodal models
  • Respond instantly through voice, movement, or gestures

This reflects a shift in how robotics is designed, from task-based automation to adaptive, real-time interaction. In that sense, Reachy Mini is not just another robotics platform. It reflects a broader move toward developer-first, interaction-driven AI systems built for real-world environments.
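
To make the pattern concrete, here is a minimal sketch of such an adaptive interaction loop. Every class and method name below is a hypothetical placeholder rather than part of the Reachy Mini or NVIDIA SDKs; a real build would swap the stub for a speech pipeline, a multimodal model, and the robot's motion API.

```python
# Hypothetical perceive-reason-act loop: spoken input in, voice and gesture out.
# DummyAgent stands in for a multimodal model; nothing here is a real robot API.
from dataclasses import dataclass

@dataclass
class Response:
    speech: str
    gesture: str

class DummyAgent:
    """Maps an utterance plus visual context to a spoken reply and a gesture."""
    def respond(self, text: str, scene: str) -> Response:
        if "hello" in text.lower():
            return Response("Hi there, how can I help?", "wave")
        return Response(f"Let me look into: {text}", "nod")

def interaction_loop(agent: DummyAgent, utterances: list[str]) -> None:
    # In a real system, utterances would come from a microphone and speech-to-text,
    # and `scene` from an onboard camera and a vision model.
    for text in utterances:
        response = agent.respond(text, scene="person facing the robot")
        print("robot says:", response.speech)
        print("robot moves:", response.gesture)

interaction_loop(DummyAgent(), ["Hello!", "Can you pass me the red cup?"])
```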

Voice as the default interface

As AI moves into physical environments, traditional interfaces become limiting. You cannot rely on screens or keyboards in many real-world scenarios. Interaction needs to be immediate and hands-free.

This is where voice becomes critical.

At GTC, multiple demos and ecosystem collaborations highlight how voice is evolving from a feature into a core interface layer. In systems built on real-time conversational AI infrastructure, voice is not just used for commands, but for full real-time interaction.

Across emerging systems, several capabilities are becoming standard:

  • Far-field audio capture for hands-free interaction
  • Speaker recognition for personalised responses
  • Wake-word activation for always-on systems
  • Real-time speech-to-speech interaction that feels conversational

In robotics setups such as Reachy Mini, this allows users to interact with machines more naturally, without needing structured prompts or predefined commands.

The result is a shift in how humans engage with AI. Instead of typing instructions or navigating interfaces, users can speak, listen, and interact in a way that mirrors human conversation.

As these systems become more reliable and widely deployed, voice is likely to become the primary way users interact with embodied AI.

Beyond robots: The expansion of voice-native devices

The implications of embodied AI extend far beyond humanoid robots.

At NVIDIA GTC 2026, there is a clear push toward voice-native edge devices powered by compact hardware and real-time AI pipelines. Instead of relying on cloud-only systems, developers are increasingly building AI that can operate directly on devices while maintaining real-time responsiveness.

One example comes from collaborations between companies like Agora and Seeed Studio, which are building voice-native edge systems that combine hardware, AI models, and real-time infrastructure.

Microphone array platforms such as Seeed Studio’s reSpeaker, powered by AI voice processors, are designed to capture voice input reliably even in noisy environments. When paired with edge AI computing and conversational AI engines, these systems can:

  • Capture voice input through far-field microphones
  • Process speech and reasoning in real time
  • Deliver responses with ultra-low latency

What makes this architecture notable is the continuous interaction loop it enables. Audio is captured on-device, transmitted through real-time networks, processed by AI systems for understanding and response, and streamed back almost instantly.
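
As an illustration only, the sketch below simulates that loop in a few lines: a device loop pushes short audio frames, a "remote" thread stands in for the speech-understanding and response pipeline, and the reply is streamed straight back to the speaker queue. The queue-based transport is a stand-in for whatever real-time network and AI stack a deployment actually uses.

```python
# Simulated continuous voice loop: capture on-device, process remotely, stream back.
# The queues stand in for a real-time transport; the "pipeline" is a placeholder
# for speech-to-text, reasoning, and text-to-speech.
import queue
import threading

mic_frames: "queue.Queue[bytes]" = queue.Queue()
speaker_frames: "queue.Queue[bytes]" = queue.Queue()

def remote_pipeline() -> None:
    """Stand-in for the AI side: understand the frame, produce a reply."""
    while True:
        frame = mic_frames.get()
        speaker_frames.put(b"synthesised reply to: " + frame)

def device_loop() -> None:
    """Stand-in for the device side: capture frames, play streamed replies."""
    threading.Thread(target=remote_pipeline, daemon=True).start()
    for i in range(3):
        mic_frames.put(f"audio-frame-{i}".encode())  # far-field capture stand-in
        print("playing:", speaker_frames.get(timeout=1).decode())

device_loop()
```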

This creates a more seamless, always-on experience compared to traditional voice assistants. As a result, developers are starting to build voice-native systems across a wide range of applications:

  • Smart home devices that respond contextually to users
  • Conferencing systems with real-time transcription and interaction
  • AI assistants embedded directly into hardware
  • Robotics interfaces that enable natural human-machine communication
  • Industrial IoT systems that can be controlled and monitored through voice

The next interface for AI

If the past decade of AI was defined by screens and text, the next decade will be defined by interaction in the physical world. Voice is emerging as the interface that enables AI to operate seamlessly across environments.

What GTC 2026 makes clear is that embodied AI is no longer a distant concept. It is becoming a practical reality, shaped by advances in robotics, edge computing, and real-time interaction.

We are already seeing early signals from companies actively building in this space.

Figure AI is developing humanoid robots designed for real-world work environments, while 1X is focused on safe, human-centric robots for the home.

Tesla continues to push its Optimus robot as part of a broader vision of AI-powered automation, and Boston Dynamics is advancing mobility and autonomy in robotics through systems like Spot and Atlas.

At the same time, Hugging Face is also playing a growing role by expanding open-source models into robotics, making it easier to combine perception, language, and action.

On the interface layer, companies such as Amazon and Google are evolving voice assistants beyond smart speakers into more context-aware, multimodal systems embedded across devices.

What connects these efforts is a shared direction: AI is becoming embodied, interactive, and continuously present.

In the near future, interacting with AI may feel less like prompting a system and more like engaging with systems that can listen, respond, and act in real time. For builders and startups, the question is no longer whether this shift will happen. It is how quickly they adapt.

Inside the next phase of AI-driven banking in Southeast Asia

Across Southeast Asia, banks and financial institutions are entering a new phase of digital transformation. Customers increasingly expect financial services to be available instantly, whether they are checking balances, resolving account issues, applying for loans, or interacting with support teams across mobile apps and messaging platforms.

Southeast Asia’s financial sector is also expanding rapidly as digital adoption accelerates across the region. According to Google, Temasek and Bain & Company’s e-Conomy SEA 2023 report, the region’s digital economy is projected to reach US$600 billion in gross merchandise value by 2030, with financial services and digital payments playing a central role in that growth.

Artificial intelligence is emerging as one of the key technologies helping banks navigate this transition. McKinsey estimates that AI could generate up to US$1 trillion in additional value annually for the global banking industry, driven by improvements in customer engagement, fraud detection, and operational efficiency.

In particular, conversational AI is gaining traction as institutions look to automate routine customer interactions, support digital onboarding processes, and provide real-time assistance across voice, chat, and video environments.

The rise of conversational banking

Conversational AI has rapidly become a central component of modern digital banking strategies. Deloitte notes that the majority of customer interactions with banks now occur through digital channels such as mobile apps and messaging platforms, increasing the need for scalable automated support systems.

Traditionally, banks relied heavily on call centres and human agents to handle customer enquiries. While effective, these systems often struggled to keep up with growing volumes of customer interactions, particularly as mobile banking adoption surged across Southeast Asia.

AI-powered conversational systems are now helping financial institutions automate many of these routine tasks. Virtual assistants can respond to frequently asked questions, guide users through account services, and provide real-time support through messaging platforms and mobile applications.

In many cases, these systems operate in hybrid environments where AI handles initial interactions while human agents step in for complex issues. This allows banks to improve response times while ensuring customers still receive personalised assistance when needed.
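
A minimal sketch of that hybrid routing logic is shown below. The intent labels and confidence threshold are illustrative assumptions, not taken from any specific bank's deployment.

```python
# Illustrative hybrid routing: the AI assistant handles routine, high-confidence
# requests; anything complex or uncertain is escalated to a human agent.
COMPLEX_INTENTS = {"dispute_transaction", "loan_restructuring", "fraud_report"}

def route(intent: str, confidence: float) -> str:
    """Decide which channel should handle a customer interaction."""
    if intent in COMPLEX_INTENTS or confidence < 0.7:
        return "human_agent"
    return "virtual_assistant"

print(route("check_balance", 0.95))        # virtual_assistant
print(route("dispute_transaction", 0.92))  # human_agent
```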

For financial institutions operating in multilingual markets like Southeast Asia, conversational AI also helps scale customer engagement across languages and regions while maintaining consistent service quality.

Also Read: A new era of automation: Establishing best practices for intelligent automation and generative AI

A partnership targeting Southeast Asia’s financial sector

Across Southeast Asia, banks and financial institutions are increasingly exploring conversational AI to improve customer engagement and operational efficiency.

As demand grows for faster, more responsive digital banking experiences, technology providers and system integrators are forming partnerships to help financial institutions deploy AI-driven interaction systems at scale.

Similar collaborations are emerging across the global financial technology ecosystem. Banking software provider Temenos, for example, has partnered with Microsoft to integrate AI capabilities into digital banking platforms, enabling financial institutions to automate customer engagement and improve operational efficiency.

Another example in Southeast Asia is a collaboration between real-time engagement technology provider Agora and Vietnam-based IT services and digital transformation company FPT, aimed at accelerating conversational AI adoption among banks and financial institutions across the region. By combining Agora’s real-time engagement and conversational AI capabilities with FPT’s enterprise integration expertise, the collaboration supports digital banking interactions across voice, chat, and video channels, enabling workflows such as customer support, payment enquiries, lending interactions, insurance onboarding, and multilingual customer engagement across regional markets.

Real-world deployments across the banking sector

Enterprise adoption of conversational AI within financial services is already gaining momentum.

In Singapore, DBS Bank has deployed AI-powered virtual assistants across its digital channels to handle routine customer enquiries, helping reduce response times while allowing human agents to focus on more complex financial services. OCBC Bank has taken a similar approach with its AI-powered chatbot “Emma”, which assists customers with home loan and banking enquiries through digital platforms.

In Vietnam, Sacombank has implemented AI voice agents as part of a next-generation AI contact centre initiative. The deployment increased call handling capacity by more than 58 per cent, allowing the system to manage up to 41,000 calls per day while improving service responsiveness and overall customer experience.

Similarly, Vietcombank uses its intelligent virtual assistant, VCB Digibot, across messaging channels to answer common customer enquiries related to loans, cards, interest rates, promotions, and currency exchange information. By automating routine requests, the assistant frees bank staff to focus more on complex customer needs and advisory services.

Another example comes from Home Credit Vietnam, which uses AI voice agents to automate large volumes of call centre interactions each month while maintaining consistent service quality across its customer operations. 

These deployments illustrate how conversational AI can improve operational efficiency while also helping financial institutions handle rapidly growing interaction volumes.

Also Read: Why the AI revolution depends on reinventing energy infrastructure

Balancing innovation with trust and compliance

While AI-driven automation offers clear efficiency benefits, financial institutions must also navigate increasingly complex regulatory environments.

Across Southeast Asia, banking and financial services organisations operate under strict frameworks governing data protection, electronic systems, and consumer safeguards. Any new digital infrastructure must therefore meet rigorous standards for security, privacy, and operational resilience.

Solutions built for the sector must be designed to operate within these regulatory boundaries while still delivering real-time engagement capabilities. For example, Singapore’s Monetary Authority of Singapore (MAS) has introduced technology risk management guidelines that require financial institutions to ensure robust cybersecurity, system resilience, and responsible use of emerging technologies when deploying digital services.

These frameworks highlight the need for AI-powered banking solutions to balance innovation with strong governance, ensuring that automation improves customer experience without compromising regulatory compliance.

The next phase of digital banking in Southeast Asia

Looking ahead, conversational AI is likely to play a growing role as financial institutions across Southeast Asia modernise their digital infrastructure.

Financial institutions are also accelerating the use of artificial intelligence. According to McKinsey’s State of AI report, financial services is among the industries seeing the fastest growth in AI adoption.

Across the region, this shift is becoming visible in how banks manage customer interactions at scale. In Thailand, for example, Kasikornbank has expanded the use of AI across its digital banking services to support automated customer support and personalised recommendations within its mobile banking ecosystem.

Deploying conversational AI in financial services, however, requires more than new software. Banks must integrate real-time communication infrastructure, enterprise AI platforms, and secure data systems while operating within strict regulatory frameworks. As a result, partnerships between AI platform providers, real-time engagement infrastructure companies, and enterprise technology integrators are becoming increasingly important. These collaborations help bridge the gap between emerging AI capabilities and the operational realities of large financial institutions.

For banks facing rising customer expectations and growing operational complexity, the ability to deliver secure, intelligent, and responsive real-time interactions may become a defining factor in the next phase of Southeast Asia’s banking transformation.

Data minimisation vs AI context maximisation: The battle defining the future of smart systems

AI product teams are under constant pressure to make systems more accurate, more personalised, and more “helpful.” The simplest path is obvious: give the model more context. Ingest more documents. Retain more history. Build long-term memory. Expand what the assistant can see, and performance usually improves.

But privacy regimes and user expectations push in the opposite direction. Data minimisation, purpose limitation, and collection restriction are not abstract ideals. They are the principles that regulators and customers rely on to keep data usage bounded and accountable.

This creates a direct design conflict: the incentives that make AI feel smarter are often the same incentives that make privacy controls weaker.

The right question isn’t “which side wins.” It’s how to build AI systems that improve without defaulting to maximal collection.

Why this tension is structural, not philosophical

In traditional software, minimisation is easier to align with product goals. You collect the fields you need for a feature, you store them for a defined purpose, and you can often explain why each piece of data exists.

AI is different because value comes from correlation and context. Models are better when they can connect fragments across time, across systems, and across interactions. Personalisation improves when the system remembers. Retrieval improves when the corpus is large. Assistance improves when the model sees the full picture.

Teams begin with a narrow scope, then expand it for quality. A support copilot starts with ticket history, then wants CRM data, then wants billing context, then wants internal notes. A productivity assistant starts with documents, then wants email, then wants a calendar, then wants chat logs. Each step can be justified as “improving user experience.”

Individually, these expansions look reasonable. Collectively, they turn an assistant into an always-on observer.

Also Read: Balancing ambition and well-being: A founder’s take on sustainable company building

Data minimisation is not anti-AI; it is pro-boundaries

Minimisation is often misunderstood as “collect less, at any cost.” In practice, it is a boundary principle. It forces organisations to answer three questions clearly.

  • What data is required for this feature?
  • What purpose does it serve?
  • How long do we need it?

AI teams struggle with these questions because the benefits of extra data are often real, but diffuse. More history can improve outcomes in unpredictable ways. More context can reduce edge case failures. More ingestion can make answers more complete.

But that uncertainty is exactly why minimisation matters. If you cannot clearly define why you need a dataset, you are not making a product decision. You are building optionality at the expense of privacy.

How “context maximisation” quietly expands risk

The privacy risk is not only about what you store. It is also about what you expose and how broadly it can be inferred.

When AI systems ingest broad corpora, they create new pathways for leakage. Users can receive summaries that reveal sensitive details they were never shown directly. Assistants can surface internal information through conversational queries. Models can retain fragments of sensitive text in ways that are hard to reason about operationally.

Long-term memory features introduce a different category of risk: the system remembers things users did not intend to persist, and those memories can resurface out of context. Even when memory is user-facing and configurable, it changes the default posture from “ephemeral interaction” to “persistent profile.”

There is also a governance risk. The more systems you connect, the harder it becomes to explain data flows. When a user asks, “Where did the assistant get that?” the answer needs to be more than “It had access.”

Performance metrics reward collection

This tension becomes sharper because performance is measurable and privacy degradation is often invisible until it is not.

AI teams can track accuracy, resolution time, customer satisfaction, deflection, and engagement. They can show improvements when they add more context. Those wins are immediate and quantifiable.

Also Read: AI agents could become the new OTAs — What it means for Agoda and the future of travel

Privacy risks are delayed and probabilistic. They appear as edge incidents, customer discomfort, regulatory scrutiny, or an erosion of trust that is hard to attribute to one design choice. This leads to a predictable outcome: teams optimise what they can measure.

If you want minimisation to hold, you have to make privacy constraints visible and product-relevant, not just a review step at the end.

Reframing the problem as “context precision”

The practical way forward is to shift from context maximisation to context precision.

Context precision means the system gets the right context for the task, not all context that exists. It treats data access as a targeted operation, not a broad entitlement.

This starts with task-based scoping. What does the assistant need to do right now? Draft a reply. Summarise a document. Recommend next steps. Each task has a minimum viable context. Build around that minimum first, then expand only with explicit justification.

It also requires separating retrieval from retention. Many systems conflate “the model needs access” with “we should store it.” In reality, the assistant can fetch context when needed without permanently retaining it. Not every useful piece of data needs to become part of a long-term memory layer.
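
A minimal sketch of that separation is shown below, assuming hypothetical task names and a caller-supplied fetch function; nothing here reflects a particular product's API.

```python
# Context precision as code: fetch only the minimum viable context for a task,
# use it for that call, and discard it instead of persisting it.
from contextlib import contextmanager

TASK_SCOPES = {
    "draft_reply":   ["current_ticket", "last_3_messages"],
    "summarise_doc": ["target_document"],
}

@contextmanager
def task_context(task: str, fetch):
    scope = TASK_SCOPES.get(task, [])
    context = {name: fetch(name) for name in scope}  # retrieval...
    try:
        yield context
    finally:
        context.clear()                              # ...without retention

with task_context("draft_reply", lambda name: f"<{name} contents>") as ctx:
    print(sorted(ctx))  # ['current_ticket', 'last_3_messages']
```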

Design patterns that reduce conflict

A few patterns consistently help reconcile performance with privacy.

Make context opt-in and visible. If the assistant is going to use email history or calendar content, make that a clear user decision, not an implied default. Users tolerate data use better when it is transparent and controllable.

Use short-lived, purpose-bound context windows. Instead of giving the assistant broad, continuous access, provide time-bounded slices aligned to the task. This improves relevance while limiting exposure.

Prefer selective retrieval over bulk ingestion. Build retrieval mechanisms that pull only what is needed, rather than indexing everything “just in case.” This reduces both the attack surface and the risk of accidental cross-context leakage.

Separate sensitive classes of data into stricter zones. Some data can be used for convenience features with minimal risk. Other data should require higher assurance and tighter policies. Treat “what the assistant can see” as a tiered model, not a single permission.
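
A tiered model can be as simple as mapping data classes to zones and checking an agent's grants against them. The class and tier names below are illustrative assumptions, with access failing closed for anything unclassified.

```python
# Illustrative tiering of what the assistant can see. Higher tiers would require
# stricter policy checks (for example, explicit consent or human review).
DATA_TIERS = {
    "calendar_availability": "convenience",   # low-risk convenience features
    "purchase_history":      "standard",
    "health_records":        "restricted",
    "payment_credentials":   "restricted",
}

def can_access(data_class: str, granted_tiers: set[str]) -> bool:
    return DATA_TIERS.get(data_class, "restricted") in granted_tiers  # fail closed

agent_grants = {"convenience", "standard"}
print(can_access("calendar_availability", agent_grants))  # True
print(can_access("health_records", agent_grants))         # False: restricted zone
```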

Treat memory as a product contract. If you introduce long-term memory, define what can be remembered, how it is edited, how it expires, and how users can inspect it. Memory without clear controls becomes a persistent privacy liability.
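
As an illustration, a memory contract can be encoded directly in the data model: every remembered item carries a purpose and an expiry, and users can inspect or delete it. The schema below is a sketch under those assumptions, not drawn from any existing assistant.

```python
# Memory as a contract: purpose-bound, expiring, user-inspectable, deletable.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class MemoryItem:
    text: str
    purpose: str
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    ttl_days: int = 30

    def expired(self) -> bool:
        return datetime.now(timezone.utc) > self.created + timedelta(days=self.ttl_days)

class MemoryStore:
    def __init__(self) -> None:
        self._items: list[MemoryItem] = []

    def remember(self, text: str, purpose: str, ttl_days: int = 30) -> None:
        self._items.append(MemoryItem(text, purpose, ttl_days=ttl_days))

    def inspect(self) -> list[MemoryItem]:
        """User-facing view of everything retained; prunes expired items first."""
        self._items = [m for m in self._items if not m.expired()]
        return list(self._items)

    def forget(self, index: int) -> None:
        del self._items[index]

store = MemoryStore()
store.remember("Prefers replies in French", purpose="personalisation", ttl_days=90)
print([m.text for m in store.inspect()])
```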

Build “privacy cost” into AI evaluation. If a model improves with more context, measure the tradeoff explicitly. The question becomes: what incremental performance did we gain, and what additional data exposure did we introduce? When teams are forced to articulate that exchange, minimisation stops being abstract.
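
One way to do that is to report accuracy gains alongside the data classes each variant touched, so the exchange is explicit in every evaluation run. The numbers and field names below are illustrative only.

```python
# Make the privacy trade-off visible: accuracy gain per additional data class exposed.
from dataclasses import dataclass

@dataclass
class EvalResult:
    accuracy: float
    data_classes_used: set[str]

def privacy_cost_report(baseline: EvalResult, candidate: EvalResult) -> str:
    gain = candidate.accuracy - baseline.accuracy
    added = candidate.data_classes_used - baseline.data_classes_used
    return (f"accuracy gain: {gain:+.1%}; "
            f"additional data classes exposed: {sorted(added) or 'none'}")

baseline = EvalResult(0.78, {"ticket_history"})
candidate = EvalResult(0.81, {"ticket_history", "crm_notes", "billing"})
print(privacy_cost_report(baseline, candidate))
```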

Also Read: Why startups fail at offshore expansion (and how to fix it)

Purpose limitation is the hardest line to hold

Purpose limitation is where most AI systems struggle. Data collected for one purpose becomes attractive for another.

A dataset gathered to improve support responses becomes a training corpus. Logs captured for debugging become long-term analytics. Conversations intended to be ephemeral become personal memory.

The danger is not malice. It is reuse for convenience.

The only reliable defence is governance that is enforceable in architecture, not just policy. If the system cannot technically access data outside a purpose boundary, the boundary holds. If it can, the boundary will eventually erode.

The most practical path is not extreme minimisation or extreme maximisation. It is precision: giving AI the context it needs for a specific task, for a defined purpose, for a bounded period, with user-visible control and auditable data flows.

Beyond the US$70K level: Why Bitcoin’s real test isn’t price yet

Bitcoin’s ability to hold above US$70K while ETF outflows cooled provided the essential foundation. The Fear and Greed Index resting at a neutral 45 signalled neither panic nor euphoria, conditions that often precede sharp reversals. This equilibrium allowed capital to rotate with confidence into broader crypto assets without the spectre of a Bitcoin-led collapse hanging over traders. I see this stability as evidence that the market now prices in institutional participation without becoming enslaved to it. Bitcoin steadies, and the ecosystem breathes.

Bitcoin’s resilience functioned as more than a price level. It served as a psychological anchor for a market still learning to decouple from traditional finance while remaining tethered to macroeconomic currents. When Bitcoin steadies above critical support, it creates space for experimentation and risk-taking elsewhere in the ecosystem. The fact that this stability occurred amid ongoing ETF flow volatility demonstrates that institutional participation, while influential, no longer dictates every intraday move.

Retail and sophisticated derivatives traders alike interpreted Bitcoin’s strength as a green light to explore opportunities beyond the largest-cap assets. This dynamic underscores a healthy evolution where Bitcoin serves as digital gold and market bellwether without stifling innovation in adjacent protocols and tokens.

The rally’s amplification came from two interconnected forces. First, speculative capital chased explosive moves in low-capitalisation tokens. Alaya Governance Token surged 94.5 per cent while RaveDAO climbed 235.4 per cent, gains fuelled by derivatives activity and social media momentum. These moves reflect a familiar pattern where risk appetite returns, capital seeks asymmetric opportunities, and narratives form around emerging projects.

Also Read: Bitcoin’s US$70K rejection was no accident: What the charts say about tonight’s Iran decision

Second, and equally important, crypto maintained a 92 per cent correlation with the Nasdaq-100 ETF, QQQ. This tight linkage means digital assets continue to ride the same macro waves as technology equities, particularly sensitivity to interest rate expectations and liquidity conditions.

On April 10, 2026, US markets extended gains with the S&P 500 rising 0.62 per cent to 6,824.66, the Nasdaq Composite advancing 0.83 per cent to 22,822.42, and the Dow Jones Industrial Average adding 0.58 per cent to close at 48,185.80. The VIX volatility index fell 7.37 per cent to 19.49, signalling reduced anxiety among equity traders. Crypto’s participation in this broader risk-on move was not coincidental but structural.

This correlation cuts both ways. When macro sentiment improves, as it did on hopes of geopolitical de-escalation in the Middle East and steady labour market data, crypto benefits from the same liquidity flows that lift technology stocks. This linkage also means crypto remains vulnerable to shifts in Federal Reserve policy or unexpected economic data. The projected advance in CPI inflation data looms as a potential catalyst for volatility.

Commodity markets reflected similar crosscurrents, with US crude settling near US$98 per barrel amid hopes of a de-escalation, while Brent crude held at US$96.71. Gold rose to US$4,790.90 per ounce as a hedge against uncertainty, and the US Dollar Index slipped 0.51 per cent to 99.13, providing modest tailwinds for risk assets, including crypto. For those of us who believe in the long-term promise of decentralised systems, this macro tether represents both a reality of the current transition period and a reminder that true independence for digital assets requires deeper structural decoupling.

Also Read: Bitcoin holds US$71K as Ethereum surges 15%: What’s driving the US$2.44T crypto rally

The market faces a clear inflexion point. Technically, the total crypto market capitalisation confronts resistance at the 23.6 per cent Fibonacci retracement level of US$2.49T. The seven-day Relative Strength Index reading of 80.72 suggests short-term overbought conditions that often precede consolidation or pullbacks. Bitcoin’s ability to hold above US$70K remains the primary support for the broader complex. A sustained break above US$72K could reignite bullish momentum across altcoins. A failure to hold US$70K might trigger a retreat toward the US$2.39T support zone.
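
For readers unfamiliar with the tool, Fibonacci retracement levels are simple arithmetic on a prior swing: each level sits at a fixed fraction of the swing range, measured back from the end of the move. The swing values in the sketch below are purely illustrative; the US$2.49T figure cited above depends on swing points not stated in this article.

```python
# Generic Fibonacci retracement arithmetic with illustrative swing values
# (total crypto market cap in US$ trillions). Not the author's actual inputs.
def retracement(swing_start: float, swing_end: float, ratio: float = 0.236) -> float:
    return swing_end - ratio * (swing_end - swing_start)

print(f"23.6% retracement: US${retracement(2.10, 2.61):.2f}T")  # ~US$2.49T under these assumptions
```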

Beyond price levels, regulatory developments warrant close attention. The SEC’s CLARITY Act roundtable scheduled for April 16 could provide clarity or confusion depending on the tone and substance of discussions. From my perspective, having engaged with policymakers on blockchain frameworks, I view regulatory progress as essential for sustainable growth, but I remain sceptical of approaches that prioritise control over innovation.

The current market posture warrants cautious optimism. Bitcoin’s foundational strength, combined with speculative enthusiasm in altcoins, creates a constructive backdrop. The confluence of technical resistance, overbought signals, and macro uncertainty demands discipline. For investors and builders alike, this environment rewards selectivity.

Projects with genuine utility, transparent tokenomics, and active communities are better positioned to withstand volatility than those riding pure speculation. The 92 per cent correlation with tech equities reminds us that crypto does not operate in a vacuum. Liquidity conditions, rate expectations, and geopolitical developments will continue to influence price action in the near term. The longer arc points toward gradual decoupling as digital asset infrastructure matures and use cases expand beyond financial speculation.

Mainstream narratives often oversimplify crypto market moves as mere risk-on or risk-off plays. The reality proves more nuanced. Bitcoin’s resilience above US$70K despite ETF outflows suggests underlying demand that transcends short-term flow data. The explosive moves in tokens like RaveDAO reflect the enduring appeal of asymmetric opportunities in emerging ecosystems.

These gains occur within a macro framework that remains rate-sensitive. This duality defines the current moment. Traders must navigate technical levels and sentiment indicators while keeping one eye on Federal Reserve communications and geopolitical developments. Builders must focus on creating real value that can sustain projects beyond the next market cycle.

Also Read: Bitcoin and Ethereum officially commodities: How the 91% S&P correlation signals a new era

The path forward likely hinges on whether Bitcoin can convert its current stability into decisive upward momentum. A break above US$72K with conviction could propel the total market cap toward the US$2.49T resistance. Success at that level would signal a shift from cautious accumulation to broader participation.

Failure to clear these hurdles might see capital rotate back into Bitcoin as a relatively safe haven within crypto or into traditional assets if macro headwinds intensify. ETF flow data will remain a crucial gauge of institutional sentiment, particularly after a rally that has pushed short-term indicators into overbought territory. Like I said yesterday, the April 16 regulatory roundtable could serve as a catalyst if it produces constructive dialogue, or as a source of volatility if expectations diverge sharply from outcomes.

The hidden risk in AI adoption: Unchecked agent privileges

The deepest argument in “The AI Agent Governance Gap” report by US-based API management company Gravitee is not really about AI hype, or even security budgets. It is about identity.

More precisely, it is about the fact that most enterprises still do not treat AI agents as independent digital actors within their security model, even though those agents can read, write, trigger, and transact across core systems.

That omission sounds technical. It is actually foundational. The report says fewer than 22 per cent of enterprises treat AI agents as first-class security identities. It also says 60 per cent still rely on legacy authentication patterns designed for human workflows, including session management and password-based approaches that make little sense for autonomous software. Add in the finding that 86 per cent do not enforce access policies for AI identities at all, and the result looks less like a governance gap and more like a missing layer in the architecture.

Also Read: AI agents are already inside your systems, but who’s controlling them?

For Southeast Asia’s enterprises, this should be a flashing red light. The region is building increasingly API-heavy businesses: digital banks, super apps, regional e-commerce platforms, supply-chain networks, healthtech systems, and public digital services. AI agents are being introduced into precisely these environments because they can switch between tools quickly. But that also means they can quickly accumulate privileges, often by inheriting credentials from the applications or service accounts around them.

Borrowed badges are not good enough

Most enterprises are still comfortable with two main identity categories: humans and machine accounts. Human accounts belong to employees. Machine accounts belong to applications or services. AI agents do not fit neatly into either box.

An AI agent is not merely an application process. It may take natural-language instructions, decide which tools to call, reason across multiple steps, escalate or delegate subtasks, and adapt its behaviour to context. Giving that kind of entity a generic service account is like issuing a blank company pass to a visitor and hoping common sense does the rest.

That is the structural weakness Gravitee is highlighting. If an agent borrows the identity of its parent system, security teams cannot easily distinguish what the system did from what the agent did. They cannot apply a tailored policy. They cannot limit access cleanly by task or time window. They cannot generate a clean forensic record if something goes wrong.

In Southeast Asia, this problem is magnified by enterprise sprawl. Large regional companies often operate shared services across several countries, with integrations built over the years by different teams and vendors. Service accounts are already hard to track. When AI agents start riding on top of those accounts, visibility degrades further.

Why token scope suddenly matters a great deal

The report points towards a more modern security approach: structured provisioning, scope-limited authorisation, contextual decision-making, continuous monitoring, and audit trails that survive forensic scrutiny. In practical terms, that means every agent should have a clearly defined owner, a lifecycle, a limited set of authorised resources and a way to prove why it was allowed to act.

This is where standards and policy models start to matter. Gravitee references OAuth 2.1, resource indicators from RFC 8707 and fine-grained authorisation models such as attribute-based access control and relationship-based access control. Stripped of jargon, the idea is straightforward: a token issued to an agent should be narrowly scoped to the exact resources and operations it needs, for the shortest practical duration, with policy checks happening at runtime.
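
In code, that narrowing shows up at token issuance. The sketch below follows the general shape of an OAuth 2.0 client-credentials request carrying an RFC 8707 resource indicator; the endpoint, identifiers, scope, and resource URI are hypothetical, and the exact parameters supported depend on the identity provider.

```python
# Hypothetical example: request a short-lived, narrowly scoped token for an
# agent identity. All URLs and names are placeholders; check your identity
# provider's documentation for the parameters it actually supports.
import os
import requests

def get_agent_token() -> str:
    resp = requests.post(
        "https://idp.example.com/oauth2/token",             # placeholder token endpoint
        data={
            "grant_type": "client_credentials",
            "client_id": "invoice-reader-agent",             # the agent's own identity
            "client_secret": os.environ["AGENT_CLIENT_SECRET"],
            "scope": "invoices:read",                         # task-specific, read-only scope
            "resource": "https://api.example.com/invoices",   # RFC 8707 resource indicator
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]                        # short-lived bearer token
```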

That matters because agents are not static users. They are dynamic callers. A finance agent may need read-only access to invoices but no permission to approve payment. A support agent may retrieve customer history, but should not be able to alter refund rules. A procurement agent may query supplier data in one jurisdiction but not exfiltrate it into another system or region.
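
A minimal sketch of what enforcing those boundaries might look like, with illustrative agent names, resources, and actions (not drawn from the report or any specific IAM product):

```python
# Fail-closed policy check keyed on (agent identity, resource); anything not
# explicitly granted is denied. Names are illustrative only.
POLICIES = {
    ("finance-agent", "invoices"): {"read"},
    ("support-agent", "customer_history"): {"read"},
    ("procurement-agent", "supplier_data"): {"read"},
}

def is_allowed(agent: str, resource: str, action: str) -> bool:
    return action in POLICIES.get((agent, resource), set())

print(is_allowed("finance-agent", "invoices", "read"))       # True
print(is_allowed("finance-agent", "payments", "approve"))    # False: out of scope
print(is_allowed("support-agent", "refund_rules", "write"))  # False: not granted
```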

Without those boundaries, enterprises are effectively granting AI agents the corporate equivalent of all-area backstage passes.

Southeast Asia’s API economy makes this urgent

This identity issue is not a niche concern for security architects. It sits directly in the path of Southeast Asia’s digital economy. The region’s leading companies are heavily API-driven, and many are building around orchestration rather than monolithic software stacks. Payments talk to fraud systems. Commerce platforms talk to logistics providers. Internal dashboards talk to data pipelines. Customer service tools talk to CRMs and knowledge bases.

Also Read: It’s not the chatbot but the access: Why AI agents are the real threat

AI agents thrive in these environments because APIs are precisely how they take action. The more connected the business, the more useful agents become. But usefulness without identity discipline is a recipe for hidden privilege.

This should concern sectors beyond pure tech. Banks deploying internal AI assistants, hospitals experimenting with clinical workflow tools, manufacturers using autonomous planning systems and public agencies digitising citizen services all face the same core question: is the agent acting under its own identity, or is it effectively piggybacking on somebody else’s authority?

If the answer is the latter, governance will always be weaker than leadership assumes.

Discovery is becoming the first security control

One telling detail in the report is where CISOs say they would invest if money were not a constraint. Some 73 per cent prioritised API and workload identity discovery and inventory, while 68 per cent focused on continuous monitoring and posture analytics. That is revealing. Security leaders are not asking for shinier dashboards because they are bored. They are asking because they do not know what identities already exist in their environments.

This is a particularly relevant issue in Southeast Asia, where outsourced development, cloud migration and rapid business expansion often leave identity estates fragmented. Companies may have one set of rules for workforce access, another for developer access, a different one for legacy applications and almost none for non-human agents. That fragmentation is manageable until AI agents start hopping between layers.

At that point, identity inventory becomes the prerequisite for everything else. If an organisation cannot enumerate its AI agents, trace their permissions and map their ownership, then access policy is theatre.
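
At its simplest, that inventory is just a record per agent identity with an accountable owner, its granted scopes, and evidence of recent activity. The fields below are an illustrative minimum, not a schema from the report.

```python
# Illustrative agent inventory record: enough to enumerate identities,
# trace permissions, and map ownership for audits.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AgentRecord:
    agent_id: str
    owner_team: str          # who is accountable for this identity
    scopes: tuple[str, ...]  # what it is authorised to touch
    last_active: datetime    # evidence it is still in use, or should be retired

inventory = [
    AgentRecord("support-copilot", "cx-platform", ("tickets:read", "kb:read"),
                datetime(2026, 4, 9)),
]
for record in inventory:
    print(record.agent_id, record.owner_team, record.scopes)
```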

The next generation of IAM will be judged by how it handles agents

Identity and access management vendors often talk about zero trust, least privilege and continuous verification. AI agents are the stress test for whether those ideas can survive contact with real enterprise automation.

The hard truth is that many current IAM implementations were not built for autonomous actors that generate tool calls, request tokens, move across contexts and perform chained operations at machine speed. That does not mean enterprises must rip everything out. It means they need to extend identity thinking beyond employees and servers.

For Southeast Asian organisations, the prize for getting this right is significant. Companies that can issue scoped, observable, revocable identities to AI agents will be able to automate more confidently across borders, business units and regulated workflows. Those that cannot will remain trapped in a cycle of cautious pilots, brittle integrations and periodic security panic.

The enterprise AI debate often fixates on model performance. But the bigger competitive question may be simpler: can your organisation tell who the agent is, what it is allowed to do and why it was allowed to do it?

If not, the system is not truly governed. It is merely busy.
