Posted on Leave a comment

Secondary sales in SEA: The liquidity lifeline when exits are scarce

During Southeast Asia’s fundraising boom, oversubscribed rounds were common and later-stage investors often wanted larger allocations than primary rounds could accommodate. Secondary share sales were sometimes the answer. Now, funding continues to stall, companies have reached meaningful scale, but exits remain limited. In this environment, secondary share sales remain an important tool, though for a different purpose: providing early liquidity to investors before a full exit.

What is a secondary sale?

A secondary sale occurs when existing shareholders sell some or all of their shares before the company has a full exit. This differs from a primary issuance, where a company issues new shares and receives the investment monies.

A secondary sale is legally a transaction between selling shareholders and the buying investor. The company is not a direct party, but it is almost always involved because:

  • Transfer restrictions in shareholder agreements are common.
  • Board approval is often required.
  • The company may need to address the liability gap (explained below).
  • Governance rights may need to be updated after early investors sell down.

The liability gap

Secondary sales often occur alongside a new funding round, especially when the round is oversubscribed or existing shareholders want to avoid further dilution. This blended structure creates additional legal and commercial complexity. The liability gap is one of the most important issues in a secondary transaction.

By way of example, incoming investors commit US$30 million to a company:

  • US$10 million goes into the company (primary issuance).
  • US$20 million goes to early investors (via the secondary).

If the entire US$30 million had been a primary issuance, the company would typically be liable for warranties to investors up to the full amount.

Also Read: Do you need to rethink your startup fundraising strategy?

But in a mixed deal, the company only receives US$10 million, while selling shareholders receive the other US$20 million. Those sellers, especially VCs, are unlikely to take on full business warranties for the US$20 million of shares being sold. Institutional investors selling shares often only give title and capacity warranties, not full business warranties.

A liability gap, therefore, emerges between what the incoming investors expect and what sellers are willing to cover. This is usually resolved in the following ways:

  • Incoming investors accept reduced warranty coverage.
  • The company agrees to cover some exposure, even though it only received part of the funds.
  • Investors rely on the commercial reality that large warranty claims are rare and accept the lower coverage.
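The arithmetic behind the gap can be sketched in a few lines. This is a minimal illustration using the US$30 million example above; the assumption that sellers' title-and-capacity warranties carry no business-warranty cover is illustrative only.

```python
# Illustrative liability-gap arithmetic for a blended round.
# Figures mirror the example above; warranty assumptions are hypothetical.

total_commitment = 30_000_000            # incoming investors' total cheque
primary = 10_000_000                     # new shares issued by the company
secondary = total_commitment - primary   # shares bought from early investors

# In a pure primary round, the company might stand behind business
# warranties up to the full amount invested.
pure_primary_cover = total_commitment

# In the blended deal, the company covers only its primary proceeds;
# institutional sellers give title and capacity warranties, assumed
# here to carry no business-warranty cover.
company_cover = primary
seller_business_cover = 0

liability_gap = pure_primary_cover - (company_cover + seller_business_cover)
print(f"Secondary portion: US${secondary:,}")      # US$20,000,000
print(f"Liability gap:     US${liability_gap:,}")  # US$20,000,000
```

On these assumptions the gap equals the secondary portion; in practice, the three resolutions listed above shrink it from both sides.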

Restrictions and governance implications

Companies undertaking a secondary transaction will have governance documents in place – shareholders’ agreements, constitution, etc. These typically include rights of first refusal (ROFR), tag‑along/co‑sale rights, and board or shareholder approval requirements. Almost certainly, a series of waivers will be required before a secondary sale can proceed alongside the approvals for the fundraise.

Also Read: Mastering the art of fundraising: Winning strategies to engage investors

If an early investor sells down significantly, the company may also need to revisit items such as board representation, veto rights, reporting rights and other investor rights. These rights may no longer be appropriate for a shareholder with a much smaller stake.

Different share classes and liquidation preferences

Cap tables often involve multiple share classes with different rights. When an incoming investor acquires shares through both primary and secondary transactions, they may end up with:

  • A new class of preferred shares (from the primary issuance), and
  • An older class, which may even be ordinary shares (purchased from existing shareholders).

If the investor wants identical rights across all of their shareholding, especially liquidation preferences, the company may need to consider reclassifying shares or buying back shares and reissuing the new class to align their rights.
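To see why mixed holdings matter, consider a simplified liquidation waterfall. This is a minimal sketch under assumed terms (a 1x non-participating preference on the new class, hypothetical exit value and stakes), not a statement of how any actual deal works.

```python
# Hypothetical: the same investor holds US$10m of new preferred shares
# (1x non-participating preference, from the primary) plus a 20 per cent
# ordinary stake bought via the secondary. All figures are illustrative.

exit_value = 15_000_000
preferred_invested = 10_000_000   # primary tranche, 1x preference
ordinary_stake_pct = 0.20         # stake acquired in the secondary

# The preference is paid first, capped at the exit proceeds.
preferred_payout = min(preferred_invested * 1.0, exit_value)

# Ordinary shares split only what remains, pro rata.
remaining = exit_value - preferred_payout
ordinary_payout = remaining * ordinary_stake_pct

print(f"Preferred tranche recovers: US${preferred_payout:,.0f}")
print(f"Ordinary tranche receives:  US${ordinary_payout:,.0f}")
```

At this low exit, the preferred tranche recovers its full US$10 million while the secondary-bought ordinary stake shares only the remaining US$5 million, which is why reclassifying or reissuing to align rights can matter.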

In conclusion

Secondary sales are already a feature of Southeast Asia’s startup ecosystem, providing some liquidity when a full exit is still far off. But they also introduce complexity, with transfer restrictions, warranty and liability allocation, governance matters, and share‑class alignment all needing careful consideration.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.

Join us on WhatsApp, Instagram, Facebook, X, and LinkedIn to stay connected.

The post Secondary sales in SEA: The liquidity lifeline when exits are scarce appeared first on e27.


It’s not the chatbot but the access: Why AI agents are the real threat

Every technology boom produces its own version of unauthorised adoption. Cloud had it. SaaS had it. Messaging apps had it. Now, AI agents are doing it at machine speed.

That is one of the most explosive threads running through the US-based API management company Gravitee’s “AI Agent Governance Gap” report. It argues that the real AI security problem is no longer hypothetical misuse but ungoverned deployment already underway within the enterprise.

The report says 75 per cent of organisations have discovered unsanctioned AI tools already running in their environments. Gravitee’s own survey data adds another damning metric: only 14.4 per cent of organisations have achieved full IT and security approval for their entire agent fleet. If shadow IT used to creep in through departmental software subscriptions, shadow AI is charging in through copilots, browser tools, API wrappers, open-source models and workflow automations that can be spun up in days.

Also Read: AI agents are already inside your systems, but who’s controlling them?

Southeast Asia is especially exposed because its digital businesses run on speed, improvisation, and distributed decision-making. That is not a criticism. It is part of why the region produces agile startups, resilient consumer platforms, and scrappy enterprise teams. But the same traits that drive innovation also make it easy for AI tools to bypass official channels. A product lead in Jakarta, a growth team in Manila, or a developer unit in Ho Chi Minh City does not need a six-month procurement cycle to start using AI. They need a company card, an API key, and a reason.

Friction is the mother of shadow adoption

One of the most useful insights in the report is brutally simple: shadow AI is a rational response to organisational friction. The white paper quotes journalist Jane Wakefield, who says, “Business leaders want to move quickly with AI. However, with different tools, different models, and different rules, it can be hard to have a clear picture of where data is going or how decisions are being made.”

That line lands because it describes a very familiar corporate pattern. Approved tools are slow to procure. Security review takes time. Legal wants data clauses. Compliance wants records. The business unit wants results this quarter. So the team finds a faster route.

This is not usually sabotage. It is incentive design. Employees are judged on output, speed, and innovation. If the approved path to AI is painful, the unapproved path becomes attractive.

In Southeast Asia, that logic is amplified by competitive pressure. Startups are trying to conserve headcount while increasing output. Large enterprises are under pressure to automate customer support, sales operations, fraud detection, procurement and internal knowledge work.

Regional conglomerates are pushing digital transformation into subsidiaries with very different levels of technical maturity. In all of those environments, an AI tool that promises faster decisions or lower labour intensity can spread before governance catches up.

The real risk is not the chatbot. It is the connection

The public conversation around shadow AI often gets stuck on employees pasting sensitive text into consumer chatbots. That is a problem, but it is no longer the whole problem. The bigger enterprise risk emerges when unsanctioned AI tools are connected to internal systems.

An AI assistant with read access to a Slack workspace is one thing. An AI agent with delegated access to a CRM, document repository, billing dashboard, or cloud admin console is something else entirely. Once those connections exist, shadow AI stops being a data leakage issue and starts becoming an operational control issue.

Also Read: When tools start acting for you: The hidden cost of shadow IT

The report warns that these tools can arrive with embedded credentials or elevated system access that security teams do not even know exists. That observation should resonate across Southeast Asia, where many companies depend on external agencies, implementation partners and loosely documented integrations. In fast-moving businesses, access is often granted to “just get it working”. Later, nobody is entirely sure which tool is calling what.

That creates a dangerous asymmetry. Business teams see productivity gains immediately. Security teams see the underlying exposure only after an incident, an audit finding or a suspicious log pattern. By then, the tool may already be part of a critical workflow.

The region’s startup culture makes this even harder to police

For a pan-Asia tech audience, the uncomfortable truth is that startup culture itself can nurture shadow AI. Founders prize initiative. Engineers are rewarded for solving problems without bureaucracy. Growth teams experiment first and document later. That is often a strength. It is also how invisible dependencies get created.

Imagine a sales team using an AI agent to summarise leads, enrich account data and draft outreach. Then it gets connected to HubSpot or Salesforce. Then it gains access to internal pricing sheets. The customer success team then follows the same workflow. Six months later, the company has an undeclared AI layer sitting between staff and core customer systems.

Nothing about that progression sounds dramatic while it is happening. That is precisely why it is dangerous.

The problem is even more acute in Southeast Asia because many companies are managing multilingual operations, fragmented vendor stacks, and regional expansion simultaneously. A single shadow AI deployment can touch data subject to Singapore’s PDPA, Indonesia’s personal data law, Vietnam’s privacy rules or sector-specific controls in financial services. The compliance exposure is no longer local. It is distributed.

Security teams are losing the race to discover what exists

Gravitee’s broader research found that 88 per cent of organisations confirmed or suspected that security incidents this year were related to agent security. Read alongside the 75 per cent shadow AI figure, the message is blunt: enterprises are not merely struggling to secure authorised AI. They are struggling to discover unauthorised AI before it matters.

This is why “approval gap” may become one of the most important phrases in enterprise AI. Many governance discussions focus on policy design. But before policies can be enforced, organisations have to know which agents, tools and workflows are already active. That sounds basic. It is not.

Also Read: AI systems as policy executors without policy clarity

Discovery is hard because AI adoption is now decentralised. Teams can access public models directly, use embedded AI features in SaaS products, deploy open-source models on cloud infrastructure or build wrappers around multiple providers. Some tools look like standalone apps. Others are merely features hiding inside software the company already uses. The sprawl is astonishingly easy to underestimate.
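Discovery can start with something as basic as an inventory of outbound traffic to known AI providers. The sketch below is purely illustrative: the provider domains, sanctioned list, and log records are assumptions, not anything taken from Gravitee's report.

```python
# Minimal shadow-AI discovery sketch: flag teams calling AI provider
# endpoints that are not on the sanctioned list. The provider domains,
# sanctioned pairs, and log records below are all illustrative.

AI_PROVIDER_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}
SANCTIONED = {("data-team", "api.openai.com")}  # approved (team, host) pairs

# Each record: (team, destination_host), e.g. parsed from proxy logs.
proxy_log = [
    ("data-team", "api.openai.com"),
    ("growth-team", "api.anthropic.com"),
    ("sales-ops", "generativelanguage.googleapis.com"),
    ("growth-team", "cdn.example.com"),
]

unsanctioned = sorted({
    (team, host)
    for team, host in proxy_log
    if host in AI_PROVIDER_HOSTS and (team, host) not in SANCTIONED
})
for team, host in unsanctioned:
    print(f"unsanctioned AI traffic: {team} -> {host}")
```

In a real environment the records would come from proxy or API gateway logs, and the sanctioned list from the approval workflow; embedded SaaS AI features would need a separate inventory entirely.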

The cost of being slow is now higher than the cost of being wrong

There is a strategic twist here that many leaders have not internalised. In the past, central technology teams could often slow adoption in the name of control. In AI, that strategy backfires. If the secure path is significantly slower than the insecure path, business units will route around it.

That means the winning governance model is not simply stricter. It has to be faster, clearer and easier to use than shadow alternatives. This is particularly relevant in Southeast Asia, where businesses operate in highly competitive markets with thin margins and relentless pressure to move. Governance that adds friction without adding usable infrastructure will be ignored.

The lesson from the report is not that organisations should crack down theatrically on every unauthorised tool; it is that they need to make compliant AI access genuinely convenient. If official channels are slow, shadow AI will keep winning.

The next era of enterprise AI security will not be defined by who writes the toughest policy. It will be defined by who builds the fastest trustworthy route from business need to approved deployment. In a region that values execution, that may be the only governance model with any chance of survival.



Networking was the topic, alignment was the outcome

Most people don’t have a networking problem. They have an environmental problem.

Networking remains one of the most talked-about skills in business, yet the way it is commonly approached has barely evolved. The prevailing advice still centres around doing more — attending more events, meeting more people, expanding reach.

But after hosting a recent event focused on networking, one thing became clear:

The issue isn’t that people don’t know how to network. It’s that they are doing it in the wrong environments.

The persistence of a broken model

Traditional networking is built on volume.

The assumption is simple: the more people you meet, the more opportunities you create. This often results in rooms filled with introductions, surface-level conversations, and an underlying pressure to make every interaction “worth it”.

In practice, this creates the opposite effect.

Conversations become transactional. Follow-ups are inconsistent. Most connections never move beyond the first meeting.

As Kelly Kam, Co-Founder of Speakers Society and Co-Creator of the KellyK Authentic Networking OS, puts it: “Most people still think networking is about collecting contacts… Trust is the real currency.”

The emphasis on volume over continuity is where most networking efforts break down.

Also Read: Networking is expanding, but execution still lags

What practitioners are actually seeing

Across founders, creators, and operators, a different pattern is emerging.

Gayathri Ramaswami, Founder and CEO of All Hands Together Inclusive School, highlights the role of reciprocity: “It is a two-way street… offer help and share resources, and watch your network become your most powerful support system.”

Cindi Wirawan, Founder of Vibe Tribe and LinkedIn Top Voice, points to timing: “They think networking is something you do when you need something… by then, you’re already late.”

Bosco Lim, Founder of Hearted Moments Studio, frames it in terms of value: “If you focus on giving first… people naturally want to reciprocate.”

Belle Kwok, Founder of Lexine Enterprise, brings clarity to the selection process: “Real networking is about choice – who you spend time with, who you align with, and who you actually want to build with.”

Taken together, these perspectives suggest a shift away from volume and towards something more deliberate.

Not more conversations. Better ones.

From presence to alignment

The most striking observation from the event was not how many people connected, but how easily conversations progressed.

Participants were not “working the room”. They were continuing discussions, scheduling follow-ups, and exploring collaborations – often within the same interaction.

The difference was not technique.

It was alignment.

Everyone in the room shared a common objective: to grow and monetise their voice as speakers, coaches, trainers, and creators.

This shared intent removed the friction typically associated with networking. Conversations had context. Outcomes had direction. Follow-ups had a purpose.

In other words, networking became a byproduct – not the goal.

Why alignment outperforms volume

When people operate within aligned environments, several things change:

  • Filtering happens upfront: The room itself reduces noise, eliminating the need to “figure out” who is relevant.
  • Conversations gain depth faster: Shared context allows discussions to move beyond introductions almost immediately.
  • Follow-through becomes natural: When there is mutual relevance, staying in touch no longer feels forced.
  • Opportunities emerge organically: Collaborations are discovered, not chased.

This is a fundamentally different model from traditional networking – one that prioritises the quality of interaction over the quantity of contacts.

Also Read: Why networking, not online applications, now determines career success

The role of systems in modern networking

Even when alignment exists, most people struggle with consistency – remembering conversations, maintaining follow-ups, and staying relevant across multiple relationships.

This is where systems, including AI, are beginning to play a role.

Rather than replacing human interaction, they reduce the operational friction around it – supporting continuity, context, and consistency across conversations.

As AI and automation become more embedded in how we work, the advantage will not go to those who can meet the most people, but to those who can build and sustain the most relevant relationships over time.

In practice, this shifts networking from a series of isolated interactions into an ongoing relationship system.

Beyond networking: Building environments that convert

It is worth noting that the event itself was not designed as a networking platform.

It was built to bring together individuals focused on monetising their voice – people actively working towards visibility, positioning, and opportunity creation.

The networking emerged as a natural consequence.

This distinction matters.

Because it suggests that the future of networking may not lie in better tactics, but in better environments – spaces where alignment is built into the room, not left to chance.

The shift ahead

Networking is not disappearing. But it is evolving.

The emphasis is moving away from:

  • How many people you meet, towards
  • How relevant those people are

And from:

  • Starting conversations, towards
  • Sustaining them

For founders, creators, and operators, the implication is clear: The most valuable networks are no longer built by indiscriminately expanding reach, but by positioning yourself within the right ecosystems.

Because when the room is right, you don’t need better networking tactics.

You need better alignment.

And when that happens, networking stops feeling like effort.

It becomes momentum.




Kickstarting your AI journey: How to avoid the million-dollar mistakes most companies make

Artificial Intelligence (AI) is no longer a futuristic concept; it’s a present-day imperative, dominating boardroom discussions and reshaping industries. Yet, for all the excitement, many organisations stumble on their AI journey. Having advised leaders from global conglomerates to agile, owner-driven firms, I’ve witnessed firsthand the common pitfalls and the pathways to genuine success. My goal is to share these global experiences through this article, helping your organisation navigate the complexities and truly benefit from AI.

Beyond the hype: What companies get wrong (and right) with AI

So, when companies declare, “We’re doing AI,” where do they most often go wrong?

The AI missteps: Where ambition meets reality

  • The “instant gratification” trap

Many executives fall for the allure of quick wins, treating AI like an “instant button” for immediate results. This often leads to hasty, expensive investment choices without a solid foundation. Imagine attempting to build a skyscraper without a proper blueprint – it’s a recipe for disaster. I recall one executive who privately confessed to exhausting their entire AI budget on expensive hardware before even defining the problem they were trying to solve. That’s like buying a Ferrari before you have a driver’s licence, let alone a road to drive it on!

  • The missing “why”: Unclear problem formulation

Excitement over the latest AI tools, like Generative AI, is understandable. However, a common misstep is failing to clearly define the actual business problem AI is meant to solve. It’s akin to having a shiny new hammer but no nail in sight! Without a clear “why,” even the most advanced AI becomes a solution in search of a problem.

  • The scattered approach: Lacking a cohesive roadmap

I’ve observed organisations launching a flurry of independent AI initiatives without a cohesive strategy. This often results in teams competing for resources, and even if projects are approved, the overall organisational improvement can be negligible. It’s like a rowing team where everyone paddles in a different direction – lots of effort, but little forward momentum. While initial exploration is valuable (understanding the concepts, imagining their context within the team, and attempting to solve lab-scale problems), a well-defined organisational roadmap should be drawn up within a reasonable time of that exploration starting. Otherwise, you’re just building a collection of really cool individual rooms, but no functional house.

  • The data dilemma: Overlooking data integrity

AI thrives on data. Yet, the importance of accurate, clean, and accessible data is frequently overlooked. This, in my experience, is the single most critical bottleneck. If your data isn’t robust, your AI efforts will struggle. It’s the classic “garbage in, garbage out” scenario, but with much more expensive garbage!

  • The human factor: Fear and resistance

People inherently resist change. When that resistance is coupled with the fear of job displacement, it grows even stronger. This can slow any AI initiative at the execution level, so it’s imperative to address the concern properly. My message is simple: AI is inevitable. You can’t put the genie back in the bottle. Embracing AI and learning to work with it is about acquiring a new superpower, not facing a new threat.

In essence, “getting it wrong” often stems from treating AI as a magic bullet or a purely technical endeavour, rather than a strategic business transformation. It’s not just about the tech; it’s about the entire orchestra playing in harmony.

Also Read: Balancing ambition and well-being: A founder’s take on sustainable company building

The ingredients for AI success: A recipe for impact

To distil it down, successful AI initiatives typically require:

  • AI literacy at the top: Board and executive levels need a clear understanding of AI’s potential and limitations.
  • Contextual understanding: AI capabilities must be understood within the unique context of your specific organisation.
  • Foundational investment: Allocate sufficient time for building robust foundational capabilities.
  • Business value focus: Clearly define the business problem and the expected value outcomes.
  • Company-wide strategy: A cohesive, well-defined roadmap ensures alignment and efficiency.
  • Addressing human emotions: Empathy and clear communication are vital to mitigate fear and uncertainty.
  • Data sanity: Clean, reliable data is the lifeblood of effective AI.
  • Top-down commitment: AI is a strategic imperative requiring unwavering support from leadership.
  • Tolerance for failure: Expect initial setbacks; they are opportunities for learning and adaptation.

From vision to reality: Making AI deliver

Moving from an AI vision to tangible business impact requires significant organisational transformation and, sometimes, tough decisions. A true cultural shift demands strong stakeholder buy-in and, frankly, top-down enforcement. Making the organisation “AI aware” and up-skilling key executives are paramount.

Here are the critical decisions that determine whether AI creates real business impact or remains theoretical:

  • The executive sponsor: An executive sponsor with a complete understanding of the goal and approach, and with unwavering commitment, is absolutely key. They are the champion, the cheerleader, and the bulldozer, moving initiatives from the drawing board to tangible benefits.
  • Strategic sourcing: I’ve also seen organisations stumble because they made the wrong decision between in-house skill development versus outsourcing, or they ended up with the wrong implementation partner or product. These are critical choices that can make or break a project.
  • Avoiding the “lab-trap”: It’s easy for in-house teams to prove a concept in a lab environment and become complacent. However, scaling to production demands an entirely different approach, requiring robust engineering and operational expertise. A proof-of-concept is like baking a single cupcake; scaling to production is like running a bakery that churns out thousands daily.
  • Robust data infrastructure: Once again, robust data infrastructure and governance are non-negotiable. AI initiatives frequently stall because their data isn’t sanitised or is simply insufficient. It’s like trying to bake a cake while basic ingredients are missing – you’re just going to end up with a mess.

Leadership, ownership, and decision-making: The pillars of success

For AI initiatives to truly deliver results, several internal conditions must be met:

  • Visionary executive sponsorship: A strong executive sponsor must articulate a compelling vision, positioning AI as a transformative and strategic imperative. A dedicated AI or data leader, accountable for adoption and monetary impact, is also crucial. True AI adoption rarely happens without an executive actively “pushing” (emphasis is on “pushing”) from the top, not just passively monitoring.
  • Cross-functional ownership: AI implementation is inherently cross-functional. Ownership must be distributed across diverse teams – data scientists, engineers, business analysts, domain experts, legal, and compliance. Each member needs a clear understanding of their role and how their contribution fits into the larger picture. It’s a team sport, and everyone needs to know their position and strategy.
  • Data-driven culture and iteration: The organisational culture should foster data-driven decision-making, embracing rapid prototyping, testing, and iteration. This means moving away from lengthy development cycles and adopting shorter feedback loops. In the world of AI, it’s “fail fast, get up, gather yourself, use the learning and try differently”.

Also Read: AI agents could become the new OTAs — What it means for Agoda and the future of travel

Measuring what matters: Quantifying AI’s impact

When it comes to measurable results, leaders must focus on tailored metrics. I recently spoke with a CEO whose manpower costs were only five per cent of his operational costs, having recently rationalised his workforce by 30 per cent. In his context, simply discussing human productivity enhancement, while valuable, wouldn’t be the most impactful objective for his business.

So, what to measure? It depends entirely on the business problem you’re solving. It could be:

  • Revenue growth: From new AI-powered products or services.
  • Cost reduction: Through process automation or optimisation.
  • Improved customer satisfaction: Due to personalised experiences or faster service.
  • Reduced risk: Through AI-driven fraud detection or predictive maintenance.
  • Faster time-to-market: For new innovations.
  • Real-world examples: I’ve led teams implementing AI combined with physics (Digital Twin) that saw a 15 per cent yield increase in an oil rig. In another instance, quality and customer satisfaction improved, and production output increased by over 25 per cent in a process manufacturing plant.

The key is to link AI initiatives directly to strategic business objectives, define quantifiable metrics before you start, and compare them post-implementation.
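Done properly, the before-and-after comparison is mechanical. A minimal sketch of the idea; the metric names and figures below are hypothetical, not drawn from the examples above.

```python
# Hypothetical before/after comparison for a pilot's pre-agreed metrics.
# Metrics and values are illustrative; define yours before the pilot starts.

baseline = {"cycle_time_days": 12.0, "defect_rate_pct": 4.0, "cost_per_unit": 85.0}
post_pilot = {"cycle_time_days": 9.0, "defect_rate_pct": 3.1, "cost_per_unit": 78.0}

for metric, before in baseline.items():
    after = post_pilot[metric]
    change_pct = (after - before) / before * 100
    print(f"{metric}: {before} -> {after} ({change_pct:+.1f}%)")
    # e.g. cycle_time_days: 12.0 -> 9.0 (-25.0%)
```

The point is not the arithmetic but the discipline: the baseline must be captured before the initiative starts, or the comparison is unfalsifiable.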

Beyond efficiency: Focusing on human outcomes

Perhaps the most important question is how to ensure AI adoption genuinely improves human outcomes – for teams, customers, and society. Any technology developed by humans should ultimately enhance human comfort and well-being. Therefore, embedding ethical AI principles from the very beginning is imperative.

This includes considerations like:

  • Fairness and equitable outcomes
  • Transparency and explainability
  • Sustainability
  • Community well-being
  • Inclusion (to moderate the digital divide)

The focus should always be on employee empowerment and augmentation, rather than automation that simply replaces jobs. How can AI make our employees better, more effective, and happier? How can it serve our customers more thoughtfully? How can it contribute positively to society? These are the questions we must continually ask.

The smartest first step: Don’t boil the ocean

For senior leaders feeling both excited and overwhelmed by AI, my recommendation is clear: do not start by creating a five-year AI master plan. It would become obsolete quickly, given the pace at which this technology is evolving.

Instead, identify and champion one or two high-impact, low-complexity AI initiatives that solve a critical business problem and can deliver measurable results within 1 to 3 months. Think of it as a pilot project, a quick win to build momentum and confidence.

Also Read: AI at work: Moving forward with employee engagement

The steps are straightforward:

  • Select a concrete, high-value business problem: What’s a genuine pain point AI could alleviate where success would be clearly visible?
  • Ensure clean data for that problem: Focus on the specific data needed, not trying to clean all your data at once.
  • Define clear, measurable business outcomes: What does success look like, specifically, for this pilot?
  • Assemble a small, dedicated, cross-functional team: Empower them by freeing them from routine work and providing necessary training.
  • Commit to success: Provide resources and remove roadblocks.
  • Achieve that first tangible success: Celebrate it! Make a big deal out of it.
  • Replicate and scale: Then, and only then, replicate what you’ve learned to other areas.

This iterative approach builds confidence, demonstrates value, and allows organisations to learn and adapt without getting bogged down in overly ambitious plans from day one. It’s about taking smart, actionable steps, not giant leaps into the unknown.

Ultimately, the companies that succeed with AI will not be the ones that move fastest, but the ones that build the right foundations and make it work in practice.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.

Join us on WhatsApp, Instagram, Facebook, X, and LinkedIn to stay connected.

The post Kickstarting your AI journey: How to avoid the million-dollar mistakes most companies make appeared first on e27.


How we scaled a B2B events business across 40+ countries

When my Co-Founder, Samuel Adcock, and I started The Ortus Club in Singapore in 2015, we were in our mid-twenties, pitching a fairly unsexy idea to enterprise clients who had never heard of us: that a dinner for 15 people would generate more pipeline than a conference booth in front of 5,000.

Nobody was particularly interested in hearing that from a small agency in Southeast Asia. The established event houses were in London and New York. The enterprise brands we wanted to work with — the Googles, the Visas, the Metas — were already being courted by firms with decades of track record and offices in every major market. We had a WeWork desk and a thesis.

A decade later, The Ortus Club has produced more than 2,500 invitation-only executive events across 40+ countries. Our client list includes Google, Visa, Meta, Adobe, IBM, Zendesk, and Airwallex. We operate across APAC, EMEA, and North America. And we have never run a single open-registration conference.

This is not a success story disguised as a LinkedIn post. This is what we actually learned — including the parts that were painful — about building a global B2B services company from Southeast Asia.

Specialisation was the only thing that made us credible early on

The most important decision we made was also the most uncomfortable one commercially: we said no to everything except invitation-only executive roundtables. No conferences. No exhibition stands. No sponsored speaking slots. One format, done obsessively well.

For the first couple of years, this meant turning away revenue. Prospective clients would ask us to run a panel at their conference or manage a large-format event, and we would say no. That felt reckless when we were trying to build a business. But it gave us something that turned out to be far more valuable than early revenue: a clear identity in a crowded market.

When a VP of Marketing at a Fortune 500 company asks, “Who does invitation-only executive roundtables?” — we wanted the answer to be us, immediately, without qualification. That only works if you are not also doing twelve other things.

We eventually documented our entire methodology in a free guide called The Art of Networking. It remains the most practical thing we have published, and it is the backbone of our company training.

Also Read: Strategic investment 101: A founder’s playbook for winning without losing control

Southeast Asia was not a disadvantage — it was a training ground

There is a specific advantage to building a global B2B business from this region that does not get discussed enough in founder narratives. Southeast Asia is the most culturally diverse business environment on the planet. Running events across Singapore, Manila, Jakarta, Kuala Lumpur, Bangkok, and Sydney in the same quarter forces a localisation discipline that companies built in more homogenous markets simply do not develop.

Our team is largely based in Manila and Singapore. The operational muscle we built, adapting tone, format, guest curation approach, and follow-up cadence across six or seven dramatically different cultures in APAC, gave us something we did not fully appreciate until we expanded into Europe and North America: we were already better at localisation than most of our competitors because we had been doing it since day one.

The cultural nuances are not trivial. The formality expected in a Tokyo executive dinner is fundamentally different from what works in Sydney. The way you position an invitation to a CTO in Singapore is not the way you position the same invitation in Jakarta. Getting this wrong does not just reduce attendance — it damages the client’s brand with exactly the people they are trying to reach. We learned this the hard way more than once in our early APAC expansion, and those lessons became the foundation for everything we built afterwards.

Scaling into EMEA and the US meant rebuilding, not copy-pasting

The move into London was our first real test of whether the model could travel outside APAC. The answer was yes — but not without significant adaptation.

London’s senior executive community operates differently from Singapore’s. There is more scepticism toward event invitations generally, longer relationship-building timelines, and a much higher premium on credibility signals in the invitation itself. Who else is attending, who is hosting, what is the venue — these details carry more weight in EMEA than in APAC, where the topic and format tend to do more of the heavy lifting.

Also Read: How founders should build for a Meta-national future

The US market presented a different challenge entirely. American executives reward directness and clear commercial framing in ways that would feel abrupt in most of Asia. The post-event follow-up expectations are faster and more transactional. And the sheer volume of competing events in markets like New York, San Francisco, and Chicago means your invitation is fighting for calendar space against a much larger field.

The constant across every market — the one thing that has not changed in a decade — is the core thesis: get the right people in a room, design a conversation that creates genuine value for every person present, and the commercial outcomes follow. That has been as true in Zurich and San Francisco as it has been in Singapore and Sydney.

Delegate acquisition is the real challenge

If I had to identify the single most underrated capability in B2B event marketing, it is delegate acquisition — the process of actually getting senior executives to say yes to your invitation and show up.

Anyone can book a nice venue. Anyone can write a compelling agenda. The part that separates event companies that deliver genuine commercial value from those that do not is whether the right people are actually in the room. A beautifully produced roundtable with the wrong 15 people is worthless. An average venue with the right 15 people is transformational.

Our entire operational model is built around this. We do not sell sponsorship packages and hope people register. We identify the specific executives our client needs in the room, and then we do the work — the research, the outreach, the personalisation, the follow-up — to get them there. That process is manual, labour-intensive, and does not scale elegantly. It is also the reason our clients keep coming back.

The 2026 Event Marketer’s Playbook — our annual research publication based on data from 295 senior B2B marketers across 29 roundtables in 30 cities — confirmed what we have seen operationally for years: the cost-per-qualified-conversation for curated invitation-only events is significantly lower than for open-attendance formats when you weight for pipeline quality. The per-event production cost is higher, but the outcome is not comparable.
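The cost-per-qualified-conversation comparison above comes down to simple arithmetic: divide the event cost by the number of attendees who actually match the target profile. A minimal sketch, using illustrative figures that are my own assumptions rather than Ortus Club or playbook data:

```python
# Hypothetical cost comparison: curated roundtable vs open-attendance event.
# All figures are illustrative assumptions, not data from the playbook.

def cost_per_qualified_conversation(total_cost, attendees, qualified_rate):
    """Cost divided by the number of attendees who match the target profile."""
    qualified = attendees * qualified_rate
    return total_cost / qualified

# A curated dinner: higher per-event cost, but nearly every seat is qualified.
curated = cost_per_qualified_conversation(
    total_cost=25_000, attendees=15, qualified_rate=0.9)

# An open conference booth: cheaper per contact, few contacts are qualified.
open_format = cost_per_qualified_conversation(
    total_cost=60_000, attendees=500, qualified_rate=0.04)

print(f"curated: ${curated:,.0f} per qualified conversation")
print(f"open:    ${open_format:,.0f} per qualified conversation")
```

With these assumed numbers, the curated dinner wins despite costing far more per head, which is the "weight for pipeline quality" point in practice.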

Also Read: The alliance economy: How founders and investors should position in a fragmented world

What I would tell a founder building a services business in Southeast Asia

First, specialise earlier and more aggressively than feels comfortable. The temptation to be a generalist is strongest when revenue is scarce, which is exactly when saying no matters most.

Second, treat the cultural complexity of this region as a competitive advantage, not a logistical headache. If you can operate effectively across APAC, you are better prepared for global expansion than you realise.

Third, document your methodology and publish it for free. The Art of Networking has been our single most effective resource — not because it generates leads directly, but because it articulates what the company stands for. And that’s contagious.

And fourth, invest disproportionately in the part of your service that is hardest to replicate. For us, that is the guest curation process: the work that goes on behind the scenes. For your business, it will be something else. But the principle is the same: the thing your competitors find most difficult to copy is the thing your clients will value most.

We are a decade in now. The thesis has not changed. The rooms have just gotten bigger — and more global.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.

Join us on WhatsApp, Instagram, Facebook, X, and LinkedIn to stay connected.

The post How we scaled a B2B events business across 40+ countries appeared first on e27.


Inside the next phase of AI-driven banking in Southeast Asia

Across Southeast Asia, banks and financial institutions are entering a new phase of digital transformation. Customers increasingly expect financial services to be available instantly, whether they are checking balances, resolving account issues, applying for loans, or interacting with support teams across mobile apps and messaging platforms.

Southeast Asia’s financial sector is also expanding rapidly as digital adoption accelerates across the region. According to Google, Temasek and Bain & Company’s e-Conomy SEA 2023 report, the region’s digital economy is projected to reach US$600 billion in gross merchandise value by 2030, with financial services and digital payments playing a central role in that growth.

Artificial intelligence is emerging as one of the key technologies helping banks navigate this transition. McKinsey estimates that AI could generate up to US$1 trillion in additional value annually for the global banking industry, driven by improvements in customer engagement, fraud detection, and operational efficiency.

In particular, conversational AI is gaining traction as institutions look to automate routine customer interactions, support digital onboarding processes, and provide real-time assistance across voice, chat, and video environments.

The rise of conversational banking

Conversational AI has rapidly become a central component of modern digital banking strategies. Deloitte notes that the majority of customer interactions with banks now occur through digital channels such as mobile apps and messaging platforms, increasing the need for scalable automated support systems.

Traditionally, banks relied heavily on call centres and human agents to handle customer enquiries. While effective, these systems often struggled to keep up with growing volumes of customer interactions, particularly as mobile banking adoption surged across Southeast Asia.

AI-powered conversational systems are now helping financial institutions automate many of these routine tasks. Virtual assistants can respond to frequently asked questions, guide users through account services, and provide real-time support through messaging platforms and mobile applications.

In many cases, these systems operate in hybrid environments where AI handles initial interactions while human agents step in for complex issues. This allows banks to improve response times while ensuring customers still receive personalised assistance when needed.

For financial institutions operating in multilingual markets like Southeast Asia, conversational AI also helps scale customer engagement across languages and regions while maintaining consistent service quality.

Also Read: A new era of automation: Establishing best practices for intelligent automation and generative AI

A partnership targeting Southeast Asia’s financial sector

Across Southeast Asia, banks and financial institutions are increasingly exploring conversational AI to improve customer engagement and operational efficiency.

As demand grows for faster, more responsive digital banking experiences, technology providers and system integrators are forming partnerships to help financial institutions deploy AI-driven interaction systems at scale.

Similar collaborations are emerging across the global financial technology ecosystem. Banking software provider Temenos, for example, has partnered with Microsoft to integrate AI capabilities into digital banking platforms, enabling financial institutions to automate customer engagement and improve operational efficiency.

Another example in Southeast Asia is a collaboration between real-time engagement technology provider Agora and Vietnam-based IT services and digital transformation company FPT, aimed at accelerating conversational AI adoption among banks and financial institutions across the region. By combining Agora’s real-time engagement and conversational AI capabilities with FPT’s enterprise integration expertise, the collaboration supports digital banking interactions across voice, chat, and video channels, enabling workflows such as customer support, payment enquiries, lending interactions, insurance onboarding, and multilingual customer engagement across regional markets.

Real-world deployments across the banking sector

Enterprise adoption of conversational AI within financial services is already gaining momentum.

In Singapore, DBS Bank has deployed AI-powered virtual assistants across its digital channels to handle routine customer enquiries, helping reduce response times while allowing human agents to focus on more complex financial services. OCBC Bank has taken a similar approach with its AI-powered chatbot “Emma”, which assists customers with home loan and banking enquiries through digital platforms.

In Vietnam, Sacombank has implemented AI voice agents as part of a next-generation AI contact centre initiative. The deployment increased call handling capacity by more than 58 per cent and allows the system to manage up to 41,000 calls per day, improving service responsiveness while enhancing overall customer experience. 

Similarly, Vietcombank uses Intelligent Virtual Assistant VCB Digibot across messaging channels to answer common customer enquiries related to loans, cards, interest rates, promotions, and currency exchange information. By automating routine requests, bank staff can focus more on complex customer needs and advisory services. 

Another example comes from Home Credit Vietnam, which uses AI voice agents to automate large volumes of call centre interactions each month while maintaining consistent service quality across its customer operations. 

These deployments illustrate how conversational AI can improve operational efficiency while also helping financial institutions handle rapidly growing interaction volumes.

Also Read: Why the AI revolution depends on reinventing energy infrastructure

Balancing innovation with trust and compliance

While AI-driven automation offers clear efficiency benefits, financial institutions must also navigate increasingly complex regulatory environments.

Across Southeast Asia, banking and financial services organisations operate under strict frameworks governing data protection, electronic systems, and consumer safeguards. Any new digital infrastructure must therefore meet rigorous standards for security, privacy, and operational resilience.

Solutions built for the sector must be designed to operate within these regulatory boundaries while still delivering real-time engagement capabilities. For example, Singapore’s Monetary Authority of Singapore (MAS) has introduced technology risk management guidelines that require financial institutions to ensure robust cybersecurity, system resilience, and responsible use of emerging technologies when deploying digital services.

These frameworks highlight the need for AI-powered banking solutions to balance innovation with strong governance, ensuring that automation improves customer experience without compromising regulatory compliance.

The next phase of digital banking in Southeast Asia

Looking ahead, conversational AI is likely to play a growing role as financial institutions across Southeast Asia modernise their digital infrastructure.

Financial institutions are also accelerating the use of artificial intelligence. According to McKinsey’s State of AI report, financial services is among the industries seeing the fastest growth in AI adoption.

Across the region, this shift is becoming visible in how banks manage customer interactions at scale. In Thailand, for example, Kasikornbank has expanded the use of AI across its digital banking services to support automated customer support and personalised recommendations within its mobile banking ecosystem.

Deploying conversational AI in financial services, however, requires more than new software. Banks must integrate real-time communication infrastructure, enterprise AI platforms, and secure data systems while operating within strict regulatory frameworks. As a result, partnerships between AI platform providers, real-time engagement infrastructure companies, and enterprise technology integrators are becoming increasingly important. These collaborations help bridge the gap between emerging AI capabilities and the operational realities of large financial institutions.

For banks facing rising customer expectations and growing operational complexity, the ability to deliver secure, intelligent, and responsive real-time interactions may become a defining factor in the next phase of Southeast Asia’s banking transformation.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.

Join us on WhatsApp, Instagram, Facebook, X, and LinkedIn to stay connected.

The post Inside the next phase of AI-driven banking in Southeast Asia appeared first on e27.


What NVIDIA GTC 2026 reveals about the future of embodied AI

Artificial intelligence is stepping off the screen.

At NVIDIA GTC 2026, one theme stood out across announcements, demos, and developer activity: AI is moving beyond cloud-based software into robots, edge devices, and real-world environments.

This shift is driving the rise of embodied AI. And increasingly, these systems are being designed with one interface in mind: voice.

From models to machines

For the past few years, AI innovation has largely centred around models, with stronger reasoning and multimodal capabilities. But GTC 2026 signals a transition from models to machines.

Instead of asking what AI can generate, developers are now exploring what AI can do in real-world environments. This includes AI systems embedded directly into physical devices and environments.

This shift is enabled by the convergence of several layers:

  • Edge AI computing platforms like NVIDIA Jetson
  • Multimodal models capable of processing vision, audio, and text
  • Real-time infrastructure for interaction and response
  • Accessible hardware platforms for rapid prototyping

Together, these layers are turning AI from a passive tool into an active system. This shift is not just theoretical. It is already shaping how developers build and experiment with AI systems today.

The rise of developer-first robotics

One of the most notable ways this shift is materialising is through the emergence of developer-first robotics platforms.

These systems are not built solely for industrial deployment. Instead, they are designed to be programmable and modular, allowing developers to prototype embodied AI applications more easily.

NVIDIA’s Isaac platform continues to play a central role here, offering simulation and development tools that allow teams to train and test robotics systems before deploying them in the real world. Jetson-powered kits are also becoming a standard foundation for edge AI and robotics experimentation.

Alongside these, newer platforms are lowering the barrier to entry even further. Reachy Mini, an open-source humanoid robot developed by Pollen Robotics in collaboration with Hugging Face and integrated with Seeed Studio’s hardware ecosystem, is one such platform gaining attention.

Unlike traditional robotics systems, Reachy Mini is designed for interaction. It combines expressive movement, modular hardware, and compatibility with modern AI models, making it easier for developers to build embodied AI agents that can engage with humans.

Why Reachy Mini stands out

What makes Reachy Mini particularly relevant in the current wave of embodied AI is its focus on real-time, human-like interaction.


While many robotics platforms are still centred on automation or industrial tasks, Reachy Mini is designed for developers building interactive AI systems. This distinction has made it increasingly visible across GTC 2026 and its surrounding ecosystem events, where it was also highlighted during NVIDIA CEO Jensen Huang’s keynote.

Developers are using Reachy Mini alongside:

  • NVIDIA Jetson Orin Nano for edge AI computing
  • Multimodal models from platforms like Hugging Face
  • Speech and voice technologies for natural interaction

This combination enables a new class of applications where robots are not just executing predefined workflows, but continuously engaging with users in real time.

Instead of fixed tasks, these systems can:

  • Understand spoken input and intent
  • Process context using multimodal models
  • Respond instantly through voice, movement, or gestures

This reflects a shift in how robotics is designed, from task-based automation to adaptive, real-time interaction. In that sense, Reachy Mini is not just another robotics platform. It reflects a broader move toward developer-first, interaction-driven AI systems built for real-world environments.

Voice as the default interface

As AI moves into physical environments, traditional interfaces become limiting. You cannot rely on screens or keyboards in many real-world scenarios. Interaction needs to be immediate and hands-free.

This is where voice becomes critical.

At GTC, multiple demos and ecosystem collaborations highlight how voice is evolving from a feature into a core interface layer. In systems built on real-time conversational AI infrastructure, voice is not just used for commands, but for full real-time interaction.

Across emerging systems, several capabilities are becoming standard:

  • Far-field audio capture for hands-free interaction
  • Speaker recognition for personalised responses
  • Wake-word activation for always-on systems
  • Real-time speech-to-speech interaction that feels conversational

In robotics setups such as Reachy Mini, this allows users to interact with machines more naturally, without needing structured prompts or predefined commands.

The result is a shift in how humans engage with AI. Instead of typing instructions or navigating interfaces, users can speak, listen, and interact in a way that mirrors human conversation.

As these systems become more reliable and widely deployed, voice is likely to become the primary way users interact with embodied AI.

Beyond robots: The expansion of voice-native devices

The implications of embodied AI extend far beyond humanoid robots.

At NVIDIA GTC 2026, there is a clear push toward voice-native edge devices powered by compact hardware and real-time AI pipelines. Instead of relying on cloud-only systems, developers are increasingly building AI that can operate directly on devices while maintaining real-time responsiveness.

One example comes from collaborations between companies like Agora and Seeed Studio, which are building voice-native edge systems that combine hardware, AI models, and real-time infrastructure.

Microphone array platforms such as Seeed Studio’s reSpeaker, powered by AI voice processors, are designed to capture voice input reliably even in noisy environments. When paired with edge AI computing and conversational AI engines, these systems can:

  • Capture voice input through far-field microphones
  • Process speech and reasoning in real time
  • Deliver responses with ultra-low latency

What makes this architecture notable is the continuous interaction loop it enables. Audio is captured on-device, transmitted through real-time networks, processed by AI systems for understanding and response, and streamed back almost instantly.
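The continuous loop described above — capture, understand, respond, repeat — can be sketched in a few lines. Every function here is a hypothetical placeholder standing in for a real capture device, model, and streaming layer; this is not any vendor's API:

```python
# A minimal sketch of the continuous voice interaction loop described above.
# Each function is a hypothetical placeholder, not a real SDK call.

def capture_audio():
    """Far-field microphone capture on the device (placeholder)."""
    return b"pcm-frames"

def understand(audio):
    """Speech recognition plus intent/reasoning on the audio (placeholder)."""
    return {"intent": "greet", "text": "hello"}

def respond(result):
    """Synthesise and stream a low-latency reply back (placeholder)."""
    return f"spoken reply for intent={result['intent']}"

def interaction_loop(turns=3):
    """Capture -> understand -> respond, repeated continuously."""
    replies = []
    for _ in range(turns):
        audio = capture_audio()
        result = understand(audio)
        replies.append(respond(result))
    return replies

print(interaction_loop(turns=1))
```

In a real deployment each stage runs asynchronously and streams partial results, which is what keeps the round trip feeling instantaneous rather than turn-based.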

This creates a more seamless, always-on experience compared to traditional voice assistants. As a result, developers are starting to build voice-native systems across a wide range of applications:

  • Smart home devices that respond contextually to users
  • Conferencing systems with real-time transcription and interaction
  • AI assistants embedded directly into hardware
  • Robotics interfaces that enable natural human-machine communication
  • Industrial IoT systems that can be controlled and monitored through voice

The next interface for AI

If the past decade of AI was defined by screens and text, the next decade will be defined by interaction in the physical world. Voice is emerging as the interface that enables AI to operate seamlessly across environments.

What GTC 2026 makes clear is that embodied AI is no longer a distant concept. It is becoming a practical reality, shaped by advances in robotics, edge computing, and real-time interaction.

We are already seeing early signals from companies actively building in this space.

Figure AI is developing humanoid robots designed for real-world work environments, while 1X is focused on safe, human-centric robots for the home.

Tesla continues to push its Optimus robot as part of a broader vision of AI-powered automation, and Boston Dynamics is advancing mobility and autonomy in robotics through systems like Spot and Atlas.


At the same time, Hugging Face is also playing a growing role by expanding open-source models into robotics, making it easier to combine perception, language, and action.

On the interface layer, companies such as Amazon and Google are evolving voice assistants beyond smart speakers into more context-aware, multimodal systems embedded across devices.

What connects these efforts is a shared direction: AI is becoming embodied, interactive, and continuously present.

In the near future, interacting with AI may feel less like prompting a system and more like interacting with systems that can listen, respond, and act in real time. For builders and startups, the question is no longer whether this shift will happen. It is how quickly they adapt.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.

Join us on WhatsApp, Instagram, Facebook, X, and LinkedIn to stay connected.

The post What NVIDIA GTC 2026 reveals about the future of embodied AI appeared first on e27.


Beyond the US$70K level: Why Bitcoin’s real test isn’t price yet

Bitcoin’s ability to hold above US$70K while ETF outflows cooled provided the essential foundation. The Fear and Greed Index resting at a neutral 45 signalled neither panic nor euphoria, conditions that often precede sharp reversals. This equilibrium allowed capital to rotate with confidence into broader crypto assets without the spectre of a Bitcoin-led collapse hanging over traders. I see this stability as evidence that the market now prices in institutional participation without becoming enslaved to it. Bitcoin steadies, and the ecosystem breathes.

Bitcoin’s resilience functioned as more than a price level. It served as a psychological anchor for a market still learning to decouple from traditional finance while remaining tethered to macroeconomic currents. When Bitcoin steadies above critical support, it creates space for experimentation and risk-taking elsewhere in the ecosystem. The fact that this stability occurred amid ongoing ETF flow volatility demonstrates that institutional participation, while influential, no longer dictates every intraday move.

Retail and sophisticated derivatives traders alike interpreted Bitcoin’s strength as a green light to explore opportunities beyond the largest-cap assets. This dynamic underscores a healthy evolution where Bitcoin serves as digital gold and market bellwether without stifling innovation in adjacent protocols and tokens.

The rally’s amplification came from two interconnected forces. First, speculative capital chased explosive moves in low-capitalisation tokens. Alaya Governance Token surged 94.5 per cent while RaveDAO climbed 235.4 per cent, gains fuelled by derivatives activity and social media momentum. These moves reflect a familiar pattern where risk appetite returns, capital seeks asymmetric opportunities, and narratives form around emerging projects.

Also Read: Bitcoin’s US$70K rejection was no accident: What the charts say about tonight’s Iran decision

Second, and equally important, crypto maintained a 92 per cent correlation with the Nasdaq-100 ETF, QQQ. This tight linkage means digital assets continue to ride the same macro waves as technology equities, particularly sensitivity to interest rate expectations and liquidity conditions.
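A headline figure like the 92 per cent correlation is typically the Pearson correlation of daily returns between the two assets. A self-contained sketch of the computation, using short synthetic return series (illustrative only, not market data):

```python
# Pearson correlation of two daily-return series, the statistic behind a
# figure like the reported 92 per cent crypto/QQQ correlation.
# The return series below are synthetic illustrations, not market data.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

qqq_returns    = [0.8, -0.3, 0.5, 1.1, -0.6]   # daily % moves, synthetic
crypto_returns = [1.6, -0.5, 1.2, 2.0, -1.4]   # daily % moves, synthetic

print(f"correlation: {pearson(qqq_returns, crypto_returns):.2f}")
```

Note that crypto's larger daily swings do not reduce the correlation: the coefficient measures co-movement direction, not magnitude, which is why a high-beta asset class can still track QQQ tightly.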

On April 10, 2026, US markets extended gains with the S&P 500 rising 0.62 per cent to 6,824.66, the Nasdaq Composite advancing 0.83 per cent to 22,822.42, and the Dow Jones Industrial Average adding 0.58 per cent to close at 48,185.80. The VIX volatility index fell 7.37 per cent to 19.49, signalling reduced anxiety among equity traders. Crypto’s participation in this broader risk-on move was not coincidental but structural.

This correlation cuts both ways. When macro sentiment improves, as it did on hopes of geopolitical de-escalation in the Middle East and steady labour market data, crypto benefits from the same liquidity flows that lift technology stocks. This linkage also means crypto remains vulnerable to shifts in Federal Reserve policy or unexpected economic data. The projected advance in CPI inflation data looms as a potential catalyst for volatility.

Commodity markets reflected similar crosscurrents, with US crude settling near US$98 per barrel amid hopes of a de-escalation, while Brent crude held at US$96.71. Gold rose to US$4,790.90 per ounce as a hedge against uncertainty, and the US Dollar Index slipped 0.51 per cent to 99.13, providing modest tailwinds for risk assets, including crypto. For those of us who believe in the long-term promise of decentralised systems, this macro tether represents both a reality of the current transition period and a reminder that true independence for digital assets requires deeper structural decoupling.

Also Read: Bitcoin holds US$71K as Ethereum surges 15%: What’s driving the US$2.44T crypto rally

The market faces a clear inflexion point. Technically, the total crypto market capitalisation confronts resistance at the 23.6 per cent Fibonacci retracement level of US$2.49T. The seven-day Relative Strength Index reading of 80.72 suggests short-term overbought conditions that often precede consolidation or pullbacks. Bitcoin’s ability to hold above US$70K remains the primary support for the broader complex. A sustained break above US$72K could reignite bullish momentum across altcoins. A failure to hold US$70K might trigger a retreat toward the US$2.39T support zone.
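For readers unfamiliar with retracement levels: each level is the swing high minus a Fibonacci ratio of the swing range. The article does not state the swing high and low behind its US$2.49T figure, so the values below are hypothetical, chosen only so the 23.6 per cent level lands near US$2.49T for illustration:

```python
# Fibonacci retracement levels measured down from a swing high.
# The swing low/high below are hypothetical, chosen so the 23.6 per cent
# level lands near the article's US$2.49T figure for illustration.

def retracement_levels(low, high, ratios=(0.236, 0.382, 0.5, 0.618)):
    """Map each Fibonacci ratio to its price level below the swing high."""
    span = high - low
    return {r: high - span * r for r in ratios}

levels = retracement_levels(low=2.30, high=2.55)  # total market cap, US$T
for ratio, level in sorted(levels.items()):
    print(f"{ratio:.1%} retracement: US${level:.2f}T")
```

With these assumed swing points, the 23.6 per cent level sits at about US$2.49T, which shows how such resistance figures are derived from a prior high-to-low range.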

Beyond price levels, regulatory developments warrant close attention. The SEC’s CLARITY Act roundtable scheduled for April 16 could provide clarity or confusion depending on the tone and substance of discussions. From my perspective, having engaged with policymakers on blockchain frameworks, I view regulatory progress as essential for sustainable growth, but I remain sceptical of approaches that prioritise control over innovation.

The current market posture warrants cautious optimism. Bitcoin’s foundational strength, combined with speculative enthusiasm in altcoins, creates a constructive backdrop. The confluence of technical resistance, overbought signals, and macro uncertainty demands discipline. For investors and builders alike, this environment rewards selectivity.

Projects with genuine utility, transparent tokenomics, and active communities are better positioned to withstand volatility than those riding pure speculation. The 92 per cent correlation with tech equities reminds us that crypto does not operate in a vacuum. Liquidity conditions, rate expectations, and geopolitical developments will continue to influence price action in the near term. The longer arc points toward gradual decoupling as digital asset infrastructure matures and use cases expand beyond financial speculation.

Mainstream narratives often oversimplify crypto market moves as mere risk-on or risk-off plays. The reality proves more nuanced. Bitcoin’s resilience above US$70K despite ETF outflows suggests underlying demand that transcends short-term flow data. The explosive moves in tokens like RaveDAO reflect the enduring appeal of asymmetric opportunities in emerging ecosystems.

These gains occur within a macro framework that remains rate-sensitive. This duality defines the current moment. Traders must navigate technical levels and sentiment indicators while keeping one eye on Federal Reserve communications and geopolitical developments. Builders must focus on creating real value that can sustain projects beyond the next market cycle.

Also Read: Bitcoin and Ethereum officially commodities: How the 91% S&P correlation signals a new era

The path forward likely hinges on whether Bitcoin can convert its current stability into decisive upward momentum. A break above US$72K with conviction could propel the total market cap toward the US$2.49T resistance. Success at that level would signal a shift from cautious accumulation to broader participation.

Failure to clear these hurdles might see capital rotate back into Bitcoin as a relatively safe haven within crypto or into traditional assets if macro headwinds intensify. ETF flow data will remain a crucial gauge of institutional sentiment, particularly after a rally that has pushed short-term indicators into overbought territory. As I noted yesterday, the April 16 regulatory roundtable could serve as a catalyst if it produces constructive dialogue, or as a source of volatility if expectations diverge sharply from outcomes.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.

Join us on WhatsApp, Instagram, Facebook, X, and LinkedIn to stay connected.

The post Beyond the US$70K level: Why Bitcoin’s real test isn’t price yet appeared first on e27.


Data minimisation vs AI context maximisation: The battle defining the future of smart systems

AI product teams are under constant pressure to make systems more accurate, more personalised, and more “helpful.” The simplest path is obvious: give the model more context. Ingest more documents. Retain more history. Build long-term memory. Expand what the assistant can see, and performance usually improves.

But privacy regimes and privacy expectations push in the opposite direction. Data minimisation, purpose limitation, and collection restriction are not abstract ideals. They are the principles that regulators and customers rely on to keep data usage bounded and accountable.

This creates a direct design conflict: the incentives that make AI feel smarter are often the same incentives that make privacy controls weaker.

The right question isn’t “which side wins.” It’s how to build AI systems that improve without defaulting to maximal collection.

Why this tension is structural, not philosophical

In traditional software, minimisation is easier to align with product goals. You collect the fields you need for a feature, you store them for a defined purpose, and you can often explain why each piece of data exists.

AI is different because value comes from correlation and context. Models are better when they can connect fragments across time, across systems, and across interactions. Personalisation improves when the system remembers. Retrieval improves when the corpus is large. Assistance improves when the model sees the full picture.

Teams begin with a narrow scope, then expand it for quality. A support copilot starts with ticket history, then wants CRM data, then billing context, then internal notes. A productivity assistant starts with documents, then wants email, then calendar access, then chat logs. Each step can be justified as “improving user experience.”

Individually, these expansions look reasonable. Collectively, they turn an assistant into an always-on observer.

Also Read: Balancing ambition and well-being: A founder’s take on sustainable company building

Data minimisation is not anti-AI; it is pro-boundaries

Minimisation is often misunderstood as “collect less, at any cost.” In practice, it is a boundary principle. It forces organisations to answer three questions clearly.

  • What data is required for this feature?
  • What purpose does it serve?
  • How long do we need it?

AI teams struggle with these questions because the benefits of extra data are often real, but diffuse. More history can improve outcomes in unpredictable ways. More context can reduce edge case failures. More ingestion can make answers more complete.

But that uncertainty is exactly why minimisation matters. If you cannot clearly define why you need a dataset, you are not making a product decision. You are building optionality at the expense of privacy.

How “context maximisation” quietly expands risk

The privacy risk is not only about what you store. It is also about what you expose and how broadly it can be inferred.

When AI systems ingest broad corpora, they create new pathways for leakage. Users can receive summaries that reveal sensitive details they were never shown directly. Assistants can surface internal information through conversational queries. Models can retain fragments of sensitive text in ways that are hard to reason about operationally.

Long-term memory features introduce a different category of risk: the system remembers things users did not intend to persist, and those memories can resurface out of context. Even when memory is user-facing and configurable, it changes the default posture from “ephemeral interaction” to “persistent profile.”

There is also a governance risk. The more systems you connect, the harder it becomes to explain data flows. When a user asks, “Where did the assistant get that?” the answer needs to be more than “It had access.”

Performance metrics reward collection

This tension becomes sharper because performance is measurable and privacy degradation is often invisible until it is not.

AI teams can track accuracy, resolution time, customer satisfaction, deflection, and engagement. They can show improvements when they add more context. Those wins are immediate and quantifiable.

Also Read: AI agents could become the new OTAs — What it means for Agoda and the future of travel

Privacy risks are delayed and probabilistic. They appear as edge incidents, customer discomfort, regulatory scrutiny, or an erosion of trust that is hard to attribute to one design choice. This leads to a predictable outcome: teams optimise what they can measure.

If you want minimisation to hold, you have to make privacy constraints visible and product-relevant, not just a review step at the end.

Reframing the problem as “context precision”

The practical way forward is to shift from context maximisation to context precision.

Context precision means the system gets the right context for the task, not all context that exists. It treats data access as a targeted operation, not a broad entitlement.

This starts with task-based scoping. What does the assistant need to do right now? Draft a reply. Summarise a document. Recommend next steps. Each task has a minimum viable context. Build around that minimum first, then expand only with explicit justification.

It also requires separating retrieval from retention. Many systems conflate “the model needs access” with “we should store it.” In reality, the assistant can fetch context when needed without permanently retaining it. Not every useful piece of data needs to become part of a long-term memory layer.
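
The separation of retrieval from retention can be made concrete. The sketch below is a hypothetical illustration, assuming task definitions and fetcher callables that are not from any real product: context is fetched for the current task only and discarded afterwards, never written to a memory layer.

```python
# Hypothetical sketch: task-scoped retrieval without retention.
# TASK_SCOPES and the fetcher callables are illustrative assumptions.
from dataclasses import dataclass

# Minimum viable context per task: what the assistant may fetch right now.
TASK_SCOPES = {
    "draft_reply": ["current_ticket", "recent_thread"],
    "summarise_doc": ["target_document"],
    "recommend_next": ["current_ticket", "account_plan"],
}

@dataclass
class ContextSlice:
    source: str
    content: str

def fetch_context(task: str, fetchers: dict) -> list[ContextSlice]:
    """Fetch only the sources scoped to this task; nothing is stored."""
    allowed = TASK_SCOPES.get(task)
    if allowed is None:
        raise ValueError(f"Unknown task: {task}")
    # The slices live only for this call chain: they are passed to the
    # model and then garbage-collected, never indexed or memorised.
    return [ContextSlice(src, fetchers[src]()) for src in allowed]

slices = fetch_context("summarise_doc", {"target_document": lambda: "Q3 plan..."})
print([s.source for s in slices])  # only the document, not the whole corpus
```

The design choice is that scope lives in a declarative table reviewers can audit, rather than being implied by whatever credentials the assistant happens to hold.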

Design patterns that reduce conflict

A few patterns consistently help reconcile performance with privacy.

Make context opt-in and visible. If the assistant is going to use email history or calendar content, make that a clear user decision, not an implied default. Users tolerate data use better when it is transparent and controllable.

Use short-lived, purpose-bound context windows. Instead of giving the assistant broad, continuous access, provide time-bounded slices aligned to the task. This improves relevance while limiting exposure.

Prefer selective retrieval over bulk ingestion. Build retrieval mechanisms that pull only what is needed, rather than indexing everything “just in case.” This reduces both the attack surface and the risk of accidental cross-context leakage.

Separate sensitive classes of data into stricter zones. Some data can be used for convenience features with minimal risk. Other data should require higher assurance and tighter policies. Treat “what the assistant can see” as a tiered model, not a single permission.

Treat memory as a product contract. If you introduce long-term memory, define what can be remembered, how it is edited, how it expires, and how users can inspect it. Memory without clear controls becomes a persistent privacy liability.

Build “privacy cost” into AI evaluation. If a model improves with more context, measure the tradeoff explicitly. The question becomes: what incremental performance did we gain, and what additional data exposure did we introduce? When teams are forced to articulate that exchange, minimisation stops being abstract.
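
One way to make that exchange explicit is to record exposure alongside accuracy whenever a context expansion is evaluated. The sketch below is a minimal illustration under assumed metrics (the accuracy numbers and field counts are hypothetical):

```python
# Illustrative sketch: pairing the accuracy gain of a context expansion
# with the data exposure it buys. All numbers below are hypothetical.
from dataclasses import dataclass

@dataclass
class EvalResult:
    accuracy: float        # task accuracy with this context configuration
    sources: frozenset     # data sources the assistant could see
    sensitive_fields: int  # sensitive fields reachable in this configuration

def privacy_cost_delta(base: EvalResult, candidate: EvalResult) -> dict:
    """Report what a candidate configuration gains and what it exposes."""
    return {
        "accuracy_gain": candidate.accuracy - base.accuracy,
        "new_sources": sorted(candidate.sources - base.sources),
        "added_sensitive_fields": candidate.sensitive_fields - base.sensitive_fields,
    }

base = EvalResult(0.81, frozenset({"tickets"}), 2)
with_crm = EvalResult(0.84, frozenset({"tickets", "crm"}), 9)
print(privacy_cost_delta(base, with_crm))
# Forces the question: is ~3 points of accuracy worth 7 more sensitive fields?
```

When every expansion proposal has to ship with such a delta, minimisation becomes a measurable product constraint rather than a late-stage review step.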

Also Read: Why startups fail at offshore expansion (and how to fix it)

Purpose limitation is the hardest line to hold

Purpose limitation is where most AI systems struggle. Data collected for one purpose becomes attractive for another.

A dataset gathered to improve support responses becomes a training corpus. Logs captured for debugging become long-term analytics. Conversations intended to be ephemeral become personal memory.

The danger is not malice. It is reuse for convenience.

The only reliable defence is governance that is enforceable in architecture, not just policy. If the system cannot technically access data outside a purpose boundary, the boundary holds. If it can, the boundary will eventually erode.
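
An architecture-enforced boundary can be as simple as refusing reads whose declared purpose was not registered at collection time. The sketch below is a hypothetical illustration; the dataset names and purposes are invented for the example:

```python
# Hypothetical sketch: a purpose boundary enforced in the access layer
# itself. If data was collected for one purpose, a request under another
# purpose fails technically, not just on paper.

# Purposes registered when each dataset was collected (illustrative).
DATASET_PURPOSES = {
    "support_transcripts": {"support_quality"},
    "debug_logs": {"debugging"},
}

class PurposeViolation(Exception):
    """Raised when access is requested outside a dataset's declared purpose."""

def read_dataset(name: str, purpose: str) -> str:
    """Grant access only for the purpose declared at collection time."""
    if purpose not in DATASET_PURPOSES.get(name, set()):
        raise PurposeViolation(f"{name} was not collected for '{purpose}'")
    return f"<records from {name}>"

print(read_dataset("support_transcripts", "support_quality"))  # allowed
# read_dataset("support_transcripts", "model_training") would raise:
# the support corpus cannot quietly become a training corpus.
```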

The most practical path is not extreme minimisation or extreme maximisation. It is precision: giving AI the context it needs for a specific task, for a defined purpose, for a bounded period, with user-visible control and auditable data flows.


The post Data minimisation vs AI context maximisation: The battle defining the future of smart systems appeared first on e27.


The hidden risk in AI adoption: Unchecked agent privileges

The deepest argument in “The AI Agent Governance Gap” report by US-based API management company Gravitee is not really about AI hype, or even security budgets. It is about identity.

More precisely, it is about the fact that most enterprises still do not treat AI agents as independent digital actors within their security model, even though those agents can read, write, trigger, and transact across core systems.

That omission sounds technical. It is actually foundational. The report says fewer than 22 per cent of enterprises treat AI agents as first-class security identities. It also says 60 per cent still rely on legacy authentication patterns designed for human workflows, including session management and password-based approaches that make little sense for autonomous software. Add in the finding that 86 per cent do not enforce access policies for AI identities at all, and the result looks less like a governance gap and more like a missing layer in the architecture.

Also Read: AI agents are already inside your systems, but who’s controlling them?

For Southeast Asia’s enterprises, this should be a flashing red light. The region is building increasingly API-heavy businesses: digital banks, super apps, regional e-commerce platforms, supply-chain networks, healthtech systems, and public digital services. AI agents are being introduced into precisely these environments because they can switch between tools quickly. But that also means they can quickly accumulate privileges, often by inheriting credentials from the applications or service accounts around them.

Borrowed badges are not good enough

Most enterprises are still comfortable with two main identity categories: humans and machine accounts. Human accounts belong to employees. Machine accounts belong to applications or services. AI agents do not fit neatly into either box.

An AI agent is not merely an application process. It may take natural-language instructions, decide which tools to call, reason across multiple steps, escalate or delegate subtasks, and adapt its behaviour to context. Giving that kind of entity a generic service account is like issuing a blank company pass to a visitor and hoping common sense does the rest.

That is the structural weakness Gravitee is highlighting. If an agent borrows the identity of its parent system, security teams cannot easily distinguish what the system did from what the agent did. They cannot apply a tailored policy. They cannot limit access cleanly by task or time window. They cannot generate a clean forensic record if something goes wrong.

In Southeast Asia, this problem is magnified by enterprise sprawl. Large regional companies often operate shared services across several countries, with integrations built over the years by different teams and vendors. Service accounts are already hard to track. When AI agents start riding on top of those accounts, visibility degrades further.

Why token scope suddenly matters a great deal

The report points towards a more modern security approach: structured provisioning, scope-limited authorisation, contextual decision-making, continuous monitoring, and audit trails that survive forensic scrutiny. In practical terms, that means every agent should have a clearly defined owner, a lifecycle, a limited set of authorised resources and a way to prove why it was allowed to act.

This is where standards and policy models start to matter. Gravitee references OAuth 2.1, resource indicators from RFC 8707 and fine-grained authorisation models such as attribute-based access control and relationship-based access control. Stripped of jargon, the idea is straightforward: a token issued to an agent should be narrowly scoped to the exact resources and operations it needs, for the shortest practical duration, with policy checks happening at runtime.
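
RFC 8707's contribution is the `resource` parameter, which binds a token to one target API rather than leaving it valid everywhere the client can reach. The sketch below shows the shape of such a request body; the client ID, scope names, and API URL are hypothetical, and a real deployment would send this to the authorisation server's token endpoint:

```python
# Illustrative sketch of an RFC 8707-style token request: `resource` binds
# the token to one API, and `scope` narrows the operations within it.
# Client ID, scope names, and URL are illustrative assumptions.

def build_token_request(agent_client_id: str, resource: str, scopes: list) -> dict:
    """Form body for a client_credentials token request, narrowly scoped."""
    return {
        "grant_type": "client_credentials",
        "client_id": agent_client_id,
        "scope": " ".join(scopes),
        # RFC 8707: declare which resource server this token is for, so it
        # cannot be replayed against other APIs the client can also reach.
        "resource": resource,
    }

req = build_token_request(
    "finance-agent-01",
    "https://api.example.com/invoices",
    ["invoices:read"],  # read-only: no payment-approval scope issued
)
print(req["scope"], req["resource"])
```

Pairing a short token lifetime with this narrow scope gives the "shortest practical duration" property the report calls for.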

That matters because agents are not static users. They are dynamic callers. A finance agent may need read-only access to invoices but no permission to approve payment. A support agent may retrieve customer history, but should not be able to alter refund rules. A procurement agent may query supplier data in one jurisdiction but not exfiltrate it into another system or region.
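
The finance, support, and procurement examples above map naturally onto an attribute-based check evaluated at call time. The sketch below is a minimal ABAC-style illustration with an invented policy table, not a real policy engine:

```python
# Hypothetical ABAC-style sketch: runtime decisions based on the agent's
# attributes (role, region) plus the requested resource and action.
# The policy table is illustrative.

# Explicit grants: (agent_role, resource, action). Everything else denied.
POLICIES = {
    ("finance", "invoices", "read"),
    ("support", "customer_history", "read"),
    ("procurement", "supplier_data", "read"),
}

def is_allowed(role: str, resource: str, action: str,
               agent_region: str = "", resource_region: str = "") -> bool:
    """Deny by default; also keep regional data inside its jurisdiction."""
    if agent_region and resource_region and agent_region != resource_region:
        return False  # no cross-border movement of data via the agent
    return (role, resource, action) in POLICIES

print(is_allowed("finance", "invoices", "read"))             # True
print(is_allowed("finance", "invoices", "approve_payment"))  # False
```

Because the decision runs on every call rather than at login, a compromised or misbehaving agent is bounded by the policy table, not by whatever it managed to authenticate as.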

Without those boundaries, enterprises are effectively granting AI agents the corporate equivalent of all-area backstage passes.

Southeast Asia’s API economy makes this urgent

This identity issue is not a niche concern for security architects. It sits directly in the path of Southeast Asia’s digital economy. The region’s leading companies are heavily API-driven, and many are building around orchestration rather than monolithic software stacks. Payments talk to fraud systems. Commerce platforms talk to logistics providers. Internal dashboards talk to data pipelines. Customer service tools talk to CRMs and knowledge bases.

Also Read: It’s not the chatbot but the access: Why AI agents are the real threat

AI agents thrive in these environments because APIs are precisely how they take action. The more connected the business, the more useful agents become. But usefulness without identity discipline is a recipe for hidden privilege.

This should concern sectors beyond pure tech. Banks deploying internal AI assistants, hospitals experimenting with clinical workflow tools, manufacturers using autonomous planning systems and public agencies digitising citizen services all face the same core question: is the agent acting under its own identity, or is it effectively piggybacking on somebody else’s authority?

If the answer is the latter, governance will always be weaker than leadership assumes.

Discovery is becoming the first security control

One telling detail in the report is where CISOs say they would invest if money were not a constraint. Some 73 per cent prioritised API and workload identity discovery and inventory, while 68 per cent focused on continuous monitoring and posture analytics. That is revealing. Security leaders are not asking for shinier dashboards because they are bored. They are asking because they do not know what identities already exist in their environments.

This is a particularly relevant issue in Southeast Asia, where outsourced development, cloud migration and rapid business expansion often leave identity estates fragmented. Companies may have one set of rules for workforce access, another for developer access, a different one for legacy applications and almost none for non-human agents. That fragmentation is manageable until AI agents start hopping between layers.

At that point, identity inventory becomes the prerequisite for everything else. If an organisation cannot enumerate its AI agents, trace their permissions and map their ownership, then access policy is theatre.

The next generation of IAM will be judged by how it handles agents

Identity and access management vendors often talk about zero trust, least privilege and continuous verification. AI agents are the stress test for whether those ideas can survive contact with real enterprise automation.

The hard truth is that many current IAM implementations were not built for autonomous actors that generate tool calls, request tokens, move across contexts and perform chained operations at machine speed. That does not mean enterprises must rip everything out. It means they need to extend identity thinking beyond employees and servers.

For Southeast Asian organisations, the prize for getting this right is significant. Companies that can issue scoped, observable, revocable identities to AI agents will be able to automate more confidently across borders, business units and regulated workflows. Those that cannot will remain trapped in a cycle of cautious pilots, brittle integrations and periodic security panic.

The enterprise AI debate often fixates on model performance. But the bigger competitive question may be simpler: can your organisation tell who the agent is, what it is allowed to do and why it was allowed to do it?

If not, the system is not truly governed. It is merely busy.

The post The hidden risk in AI adoption: Unchecked agent privileges appeared first on e27.