The architect’s mandate: Building a resilient foundation for the intelligent enterprise

Discover why relying on MS Office plugins for AI Agent deployment creates technical debt, and explore an enterprise management system perspective on building a robust, version-agnostic AI strategy for the modern enterprise.

In the current era of rapid technological evolution, the “Intelligent Enterprise” is no longer a visionary concept but a baseline requirement for global competitiveness. Central to this transformation is the deployment of AI Agents: autonomous entities capable of orchestrating complex business processes across disparate systems. However, as organizations rush to integrate these capabilities, a significant strategic error is emerging: the attempt to tether agentic AI to legacy productivity frameworks via MS Office plugins. From a systems architecture perspective, a truly scalable AI Agent strategy must prioritize data gravity and process integrity over the superficial convenience of a sidebar in MS Word or MS Excel. To achieve sustainable digital transformation, leadership must look beyond the desktop and toward a unified, cloud-native intelligence layer.

The fragmented ecosystem: Navigating the versioning trap of MS Office

For decades, the developer community has recognized a fundamental truth: developing and maintaining MS Office plugins is an exercise in managing chaos. Unlike modern, unified cloud platforms, the Office 365 ecosystem remains plagued by extreme fragmentation. While MS Copilot promises a glimpse into an integrated future, the reality on the ground is a patchwork of web-based, “New Outlook,” and legacy desktop installations. This “versioning hell” creates a fragile environment for AI Agent deployment. When business logic is embedded within a plugin, it becomes hostage to the local environment of the user. For an enterprise seeking to harmonize global operations, relying on a medium where a significant portion of the user base still operates on end-of-life legacy versions is not just a technical risk—it is a breach of operational excellence.

Also read: Why traditional SEO is dying in Singapore — and how AISEO pioneers are winning the next Blue Ocean

The tender paradox: Why rigid requirements drive out competence

A disturbing trend has emerged in the procurement phase of AI transformation: the “Universal Support” mandate. We frequently observe layman buyers issuing tender invitations that require vendors to guarantee plugin compatibility across every iteration of MS Office and Office 365 currently in use. This requirement acts as a filter for quality, but in reverse. A competent, high-maturity vendor understands the exponential cost and technical impossibility of maintaining stable AI Agent behavior across decades-old COM or VSTO architectures and modern JavaScript APIs. Consequently, the most capable partners often withdraw from the bidding process. This leaves the enterprise with less experienced vendors who overpromise in the initial contract, unknowingly setting the stage for a systemic failure in software assurance and lifecycle management.

The economic friction of plugin maintenance and software assurance

The disconnect between a buyer’s expected maintenance cost and a vendor’s actual developer overhead is the primary reason MS Office plugins are typically abandoned within 24 months. The labor-intensive nature of debugging an AI Agent that fails only in a specific build of MS Excel 2019, for instance, far outweighs the typical “Software Assurance” fee structured in a standard SLA. As Microsoft pushes frequent updates to MS Copilot and its core SaaS offerings, the underlying hooks for third-party plugins often break without warning. For the vendor, the cost of continuous refactoring becomes a margin-killing endeavor; for the enterprise, the result is a “broken” AI experience that erodes user trust and stalls the broader digital roadmap.

Also read: AI agents and ERP: Why Singapore businesses must act now

Data silos and the lack of cross-functional context

Beyond the technical fragility of plugins, using MS Office as the primary base for an AI Agent strategy fails because it prioritizes “document-centric” data over “process-centric” data. A document in MS Word or a sheet in MS Excel is often a static output of a much larger business process that lives in your ERP or CRM. When an AI Agent is confined to a plugin, it lacks the deep, transactional context required to make high-value decisions. To move from simple automation to true agency, the AI must reside where the business logic lives—at the core of the enterprise data stack—not at the peripheral edge where information is merely formatted for presentation.

Security, governance, and the shadow AI risk

Security and compliance are the cornerstones of the state-of-the-art enterprise management system philosophy. Deploying AI through Office 365 plugins introduces a fragmented security perimeter. Each plugin represents a potential endpoint for data exfiltration and a complex challenge for Identity and Access Management (IAM). Managing the permissions of an AI Agent across thousands of individual desktop installations is an administrative nightmare that invites “Shadow AI” into the organization. A centralized AI strategy allows for a single point of governance, ensuring that data privacy and ethical AI guardrails are applied consistently across all business functions, rather than being managed on a per-plugin, per-user basis.

Performance bottlenecks and scalability constraints

Finally, the desktop environment is fundamentally unsuited for the heavy lifting required by modern AI Agent architectures. Plugins share resources with the host application; a complex reasoning task initiated in a plugin can lead to latency, application crashes, and a degraded user experience in MS Outlook or Excel. More importantly, this architecture does not scale. An enterprise-grade AI strategy requires a decoupled, microservices-based approach where the AI’s compute requirements are independent of the user’s local hardware or the stability of a specific office suite. Scale is achieved through cloud-native orchestration, not through adding more overhead to a word processor.

Also read: Why Singapore manufacturers must embrace MES for the future

Conclusion: Strategic alignment for the future-ready enterprise

To lead in the digital economy, organizations must stop viewing the AI Agent as a “feature” of their productivity software and start viewing it as a core component of their enterprise architecture. While MS Copilot provides valuable individual productivity gains, it is not a substitute for a robust, vendor-agnostic AI strategy. By avoiding the pitfalls of MS Office plugins—the versioning traps, the procurement fallacies, and the maintenance deficits—leadership can build a foundation that is resilient, secure, and truly intelligent. The path forward lies in centralizing intelligence at the heart of business processes, ensuring that your AI strategy drives value today and scales for the innovations of tomorrow.

Why we write this article

PRbyAI aims to share updated market news using our team’s tech knowledge, helping B2B customers make informed decisions.

This article was shared with us by PRbyAI

Featured Image Credit: Canva Images

About PRbyAI

PRbyAI is a tech-driven Martech startup leveraging cutting-edge AISEO to help customers generate leads and tap into new markets.

The post The architect’s mandate: Building a resilient foundation for the intelligent enterprise appeared first on e27.

Big in numbers, weak in value: The limits of MSME formalisation in Indonesia

Micro, small, and medium enterprises (MSMEs) play a vital role in a nation’s economic resilience. In Indonesia, MSMEs helped carry the economy through the 1997-1999 monetary crisis. According to Statistics Indonesia, although the number of MSMEs declined 7.43 per cent post-crisis, their contribution to GDP spiked to 52.24 per cent, while their export value rose by around 76.48 per cent. At that time, MSMEs acted as a shock absorber while larger corporations collapsed.

Fast forward to today, and MSMEs still dominate Indonesia’s business landscape. The ASEAN Investment Report 2021 put the number of MSMEs in Indonesia at around 65.45 million, the largest among Southeast Asian countries.

Source: ASEAN Investment Report 2021

At first glance this sounds like a success story, but the data raises more uncomfortable questions: does quantity still translate into economic growth? Or has the MSME narrative stagnated, with big numbers but weak value creation?

When big numbers don’t mean real growth

The dominance of MSMEs is often celebrated as a sign of economic growth. However, most MSMEs in Indonesia remain informal, low in productivity, and highly vulnerable. Many operate without proper bookkeeping, business licenses, or access to formal financing. Indonesia’s Vice Minister of MSMEs, Helvi Moraza, stated that 69.5 per cent of MSMEs are unable to access financing, while 43.1 per cent of them actually require funding to scale up their business productivity.

Source: Accurate Blog Definition of NPL

At the same time, the Financial Services Authority (OJK) reported that MSME Non-Performing Loans (NPL) reached 4.02 per cent in 2024. Based on the credit quality classification above, category four reflects a concerning condition in which repayment and recovery are seriously in doubt. This figure signals that MSME financing issues are not only about access, but also about sustainability.

Also Read: Why agritech is key to securing long-term food resilience in Indonesia

This situation is driven by multiple structural factors. Cited from Tempo, a joint study by the Mastercard Centre for Inclusive Growth, Mercy Corps, and 60 Decibels identified three main challenges that hinder MSME growth in Indonesia.

  • First is the limited capability to operate digital platforms, even when MSMEs are aware of their benefits.
  • Second is restricted access to business development services.
  • Third is low awareness and weak intention to utilise loans and other formal financial products.

Ironically, business-support platforms and services are already widely available, such as point-of-sale systems, EDC machines, accounting tools, advertising platforms, and digital payment systems. However, most of these services require subscription fees, which MSMEs often perceive as non-essential spending due to limited capital. At the same time, education and socialisation regarding the urgency of these tools remain weak. Government programs such as UMKM Go Digital do exist, but many of them are short-lived. Programs often run for only one to three years before being discontinued, leaving no long-term impact.

Another perspective comes from the National Law Development Agency, which argues that a concrete step to empower MSMEs lies in regulatory reform. Existing regulations need adjustment to reflect current economic and social realities.

This phenomenon resembles an iceberg. What is visible on the surface, large numbers of MSMEs, hides deeper structural problems underneath. Numbers alone mean little without real progress and effective approaches to unlock the actual potential of MSMEs.

Government support: Necessary but not sufficient

The Indonesian government has positioned MSMEs as a national priority. Banks are encouraged to channel financing through programs such as KUR, alongside digital onboarding initiatives, tax incentives, and simplified licensing via the Online Single Submission (OSS) system. These efforts aim to remove administrative and legal barriers. In practice, however, formalisation is often treated as a checkbox rather than a long-term process to upgrade MSMEs into sustainable businesses.

Most policies focus heavily on onboarding MSMEs into the formal system, while paying less attention to maintaining progress, upgrading capabilities, and evaluating outcomes based on clear performance indicators. In reality, MSMEs require continuous guidance to improve productivity, management capacity, and market positioning.

Also Read: Indonesia courts Nvidia and AWS as it eyes a bigger role in global chip supply chains

Without parallel support in productivity improvement, managerial skills, market access, and technology adoption, formalisation risks becoming symbolic rather than impactful.

Is formalisation the right lever?

Formalisation is often positioned as the key to unlocking MSME growth. Once registered, MSMEs are expected to gain access to financing, enter formal markets, and manage their businesses more professionally. In theory, this logic holds. In practice, it is incomplete. Formalisation only works when it reduces friction, not when it creates additional burdens. For many MSMEs, formalisation feels intimidating, leading them to remain informal despite the potential benefits.

A study by Dr. Shova Thapa Karki and Professor Mirela Xheneti from the University of Sussex found that business formalisation is influenced by whether entrepreneurship is driven by necessity or opportunity. Necessity-driven entrepreneurs are often shaped by structural constraints such as unemployment, poverty, and lack of alternatives, while opportunity-driven entrepreneurs pursue growth, independence, and long-term value creation.

This distinction closely reflects the Indonesian context. Most MSMEs operate primarily to meet basic needs rather than to scale their businesses. This is largely due to the dominance of micro-level enterprises. According to the Indonesian Chamber of Commerce and Industry (KADIN), micro enterprises, both in the agriculture and non-agriculture sectors, account for 99 per cent of total business units. This creates a structural limitation on growth.

An article from The Conversation further highlights that despite their dominance in numbers, micro-level enterprises contribute relatively little to GDP. Their impact on broader economic development remains limited, as business motivation is often individual-driven rather than oriented toward collective or systemic growth.

So, is formalisation the right lever? The answer is conditional. Formalisation can act as a catalyst for access to capital and markets, but it must be supported by a sustainable system. Revising MSME empowerment regulations, strengthening local business communities, implementing continuous monitoring with clear KPIs, and reshaping the perception of financing as an enabler rather than a threat are critical steps. Without these measures, formalisation risks becoming an administrative exercise instead of a pathway toward sustainable MSME growth.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.

Featured image courtesy: Devi Puspita Amartha Yahya on Unsplash

Financing the real economy: Why Southeast Asia needs capital that listens, not just lends

Across Southeast Asia, I’ve seen how the narrative of innovation often revolves around billion-dollar valuations and high-velocity startups. Yet behind those headlines lies a quieter truth: most of the region’s real economic energy still comes from small and midsized enterprises. The manufacturers, logistics operators, and service providers that keep our cities running.

And it’s precisely these businesses that I see struggling to be heard.

Traditional credit systems still ask SMEs to fit narrow definitions of “bankable.” Venture funds, meanwhile, chase disruption but often overlook durability. Somewhere in between lies the real economy, asset-rich but liquidity-poor, growing but unseen.

The question isn’t whether these firms deserve capital. It’s whether today’s capital still knows how to listen.

From speed to structure

Over the past few years, I’ve watched funding in Southeast Asia grow more polarised. Late-stage deals have surged, but seed and early-stage funding have contracted sharply. The pattern suggests a market that values maturity over momentum, yet it also reveals how little structural support exists for SMEs that never fit the startup mould to begin with.

What these businesses need isn’t another “growth story.” They need financing that aligns with the rhythm of their operations, loans that move with inventory cycles, repayment terms that reflect market volatility, and investors who understand that resilience sometimes matters more than returns.

In my view, the real innovation isn’t speed. It’s in structure.

Also Read: Revisiting “Something Ventured”: What the birth of venture capital still teaches Founders today

The missing ingredient: Trust

Trust may be the least analysed variable in private capital, yet I believe it underpins every successful transaction.

In emerging markets, where data can be patchy and legal frameworks uneven, relationships still matter more than algorithms. Financing models that embed trust, through transparent governance, local presence, and shared accountability, often outperform those that rely solely on credit scores or valuations.

That doesn’t mean abandoning rigour. It means recognising that rigour itself can take different forms: a business owner’s track record, a community’s reputation, or an asset’s proven utility. In many cases, I’ve found that these informal signals of credibility are more predictive of repayment than any spreadsheet.

The capital that listens pays attention to those signals.

Designing for reality

I’ve come to view finance not just as a product, but as a design exercise. We need to prototype new structures, revenue-linked models, milestone-based repayment, or collateralisation through non-traditional assets that mirror how real businesses actually operate.

This is what I call governed capital: money deployed with both structure and empathy. Responsible lending isn’t about avoiding risk; it’s about understanding it. It’s about designing financial systems that reflect the lives and cycles of those they serve, rather than forcing businesses to contort themselves to qualify.

Beyond ESG: Building systems that endure

Much has been said about “impact investing,” but I’ve often seen the region treat it as an aesthetic, a badge of responsibility rather than a redesign of intent. Real impact investing must be slow capital: patient, cyclical, and local.

It should finance the middle of the market, the businesses that hire, train, and sustain, not only those that promise exponential growth. And it should measure success in systems built to endure market shocks, not in exits achieved before the cycle turns.

Also Read: Grants are not just for nonprofits: Why for-profit operators miss out on early-stage capital

That philosophy demands more than new metrics. It demands new mindsets from investors, regulators, and entrepreneurs alike.

What comes next

If Southeast Asia’s next decade is to be defined by capital that listens, I believe a few principles can guide the way forward:

  • Local fluency over global templates. Effective capital requires proximity to markets, to culture, and to people.
  • Governed flexibility. Rules and empathy are not opposites; they are co-designers of trust.
  • Measured ambition. Growth that compounds slowly is often the kind that lasts.

When finance learns to operate at a human scale, SMEs cease to be an afterthought. They become the architecture of regional resilience, proof that in an age of noise, the most transformative capital may be the kind that moves quietly, listens deeply, and stays long enough to matter.

Featured image courtesy: Canva

Why Bitcoin fell from US$100k to mid US$60k amid macro uncertainty

Bitcoin is mired in a multi-day losing streak that analysts identify as its harshest reset since the last major bear markets. The asset peaked above US$100,000 in October 2025 before falling roughly 50 per cent to the mid US$60,000s. A sharp flush to about US$60,000 on 5 February triggered heavy forced selling and extreme options demand for downside protection.

Volatility and derivatives stress are at levels last seen during the FTX era and the 2018-style resets. On-chain and valuation metrics have shifted into early bear-market territory. Sentiment sits near extreme fear, with the Fear & Greed Index at 6, the second-lowest reading ever. Key support zones now cluster around US$60,000 and roughly US$55,000. Investors are watching ETF flows and whether on-chain composite indices recover or slide further toward full capitulation zones.

The streak reflects broad de-risking across spot, derivatives, and ETF flows after a very extended bull run. Analysts at K33 and Bitcoin Magazine describe capitulation-like conditions in volume, funding, and options skew as BTC approached US$60,000. Daily RSI sits near 16. US spot Bitcoin ETFs have seen around US$400 million in weekly net outflows.

A big drop in assets under management from a 2025 peak has removed an important source of incremental demand. This data suggests the market struggles to find buyers at current levels. The structure looks more like the early part of a bear phase than a brief correction. This implies longer, choppy sideways to down price action appears likely.

CryptoQuant’s Combined Market Index blends valuation, profitability, spending behaviour, and sentiment. This index dropped to around 0.2. Analysts linked this zone to the early stages of the 2018 and 2022 bear markets rather than a mid-cycle dip. A separate heatmap of 10 major on-chain metrics shows all key signals in the red band. These signals include trader profit margins and network activity. Conditions remain inconsistent with new highs in the short term.

Realised price tracks the average cost basis of all BTC in circulation and currently stands at around US$55,000. Past cycle lows have often formed 24 to 30 per cent below it, which places a potential high-risk, high-reward zone in that area if history repeats. Analysts flag US$60,000 to US$62,000 as a critical support band, and K33’s work suggests consolidation between roughly US$60,000 and US$75,000 as the base case, with deeper downside if US$60,000 fails.
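The zone implied by these figures is simple arithmetic, and a short sketch makes it concrete. The `discount_zone` helper below is purely illustrative (it is not from any cited analyst model) and assumes the realised price of roughly US$55,000 and the historical 24-30 per cent discount quoted above:

```python
# Illustrative arithmetic only: derive the "high-risk, high-reward" zone
# implied by a realised price of ~US$55,000, with past cycle lows having
# formed 24 to 30 per cent below it (figures from the paragraph above).
REALISED_PRICE = 55_000

def discount_zone(price: float, low_pct: float = 0.24, high_pct: float = 0.30):
    """Return the (upper, lower) bounds of the band low_pct-high_pct below price."""
    return price * (1 - low_pct), price * (1 - high_pct)

upper, lower = discount_zone(REALISED_PRICE)
print(f"Implied zone: US${lower:,.0f} - US${upper:,.0f}")
# roughly US$38,500 - US$41,800
```

If history repeats, that band sits well below the US$60,000 support level discussed above, which is why analysts frame it as a deeper-capitulation scenario rather than the base case.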

Also Read: Crypto market bleeds US$44B as US$78M Bitcoin liquidations spark panic

Broader market context adds weight to this cautious outlook. Major US stock indices ended slightly higher on February 17, 2026. The session saw the S&P 500 swing between gains and losses as investors grappled with persistent fears regarding AI expenditures. The S&P 500 rose 0.1 per cent to close at 6,843.22. It found support near its 100-day moving average after an initial drop of nearly one per cent.

The Nasdaq Composite gained 0.14 per cent. The Dow Jones Industrial Average climbed 32.26 points to settle at 49,533.19. Financials and real estate each rose approximately 1.1 per cent. In contrast, the energy sector fell 1.4 per cent, and consumer staples dropped 1.5 per cent. General Mills sank seven per cent after cutting its annual outlook. The technology-heavy Nasdaq faced pressure from a 2.2 per cent drop in software-focused ETFs.

Commodities signalled risk-off behaviour. Gold prices plummeted more than two per cent. Prices fell below US$5,000 to settle at around US$4,884 per ounce. Oil prices dropped roughly two per cent to a two-week low. Brent crude settled at US$67.42 and WTI at US$62.33. Reports of a new window of opportunity for a potential nuclear deal reduced safe-haven demand for gold. This also lowered the risk premium on oil. AI anxiety triggered a bout of volatile trading.

Scepticism about tech giants’ ability to monetise their high AI expenditures worried investors. Dip buyers helped indices recover by the close. Liquidity remained thin following the US Presidents’ Day holiday and ongoing Lunar New Year closures in China and Hong Kong. The 10-year Treasury yield edged up slightly to 4.06 per cent. The 2-year yield rose to 3.439 per cent.

Also Read: Markets in freefall: AI fears trigger US$4B Bitcoin ETF exodus

My view synthesises these disjointed signals into a coherent narrative. The Bitcoin reset aligns with broader macro uncertainty. While stock indices closed slightly higher, the underlying volatility suggests fragility. The drop in gold alongside Bitcoin indicates a liquidation of safe havens rather than a rotation into risk. The US$400 million weekly ETF outflows confirm institutional hesitation. Investors need multiple consecutive days of strong inflows to reset the current bearish regime. The realised price near US$55,000 offers a logical floor, yet history suggests prices could dip 24 to 30 per cent below this level.

The Combined Market Index at 0.2 reinforces the bear-market comparison. Traders should focus less on picking an exact bottom and more on whether US$60,000 and the realised price hold. ETFs and on-chain signals must stabilise before optimism returns. The current environment demands patience as the market searches for a true bottom amid economic crosscurrents.

AI scepticism in equities and crypto derivatives highlights shared sensitivity to liquidity conditions across asset classes. This parallel suggests that the crypto downturn is not isolated from traditional finance movements. Investors observe that doubts about technology expenditure in the stock market mirror the de-risking seen in Bitcoin derivatives.

Both markets react sharply to changes in yield expectations and risk appetite. The 10-year Treasury yield edged up to 4.06 per cent, adding pressure to valuation models for high-growth assets. Higher yields typically reduce the present value of future cash flows for tech firms and diminish the appeal of non-yielding assets like Bitcoin. This correlation strengthens the argument for a cautious approach until yields stabilise.

Nevertheless, the path forward involves navigating choppy sideways action until clear recovery signals emerge.

Featured image courtesy: Canva

SusHi Tech Tokyo 2026 returns to spotlight AI, robotics, and urban resilience

The Tokyo Metropolitan Government has announced that SusHi Tech Tokyo 2026 will take place from April 27 to April 29 at Tokyo Big Sight, positioning this year’s edition as its most ambitious yet. Now in its fourth year, the conference has grown into Asia’s largest global innovation gathering, and organisers are aiming to further elevate it as a worldwide platform where innovators converge to design the cities of tomorrow.

Short for “Sustainable High City Tech,” SusHi Tech Tokyo was conceived as a forum to envision, debate and implement future societies shaped by advanced technologies. Over the years, it has evolved into a comprehensive program of intensive sessions, live demonstrations and startup showcases. In 2026, the event will expand both in scale and global reach, welcoming participants from across industries and geographies to experience not only frontier technologies but also Tokyo’s unique cultural and urban strengths.

This year’s programme centres on four key areas: AI, Robotics, Resilience and Entertainment.

AI takes centre stage as a transformative force reshaping industries, work styles and the nature of innovation itself. Leaders from pioneering companies and renowned research institutions will explore how humans and AI can collaborate to build more inclusive and productive societies. The event will feature an AI-focused startup exhibition highlighting university and research spin-offs from across Japan, alongside a pitch contest and partner screenings of award-winning works from an international AI film festival.

Robotics will showcase the rise of “physical AI,” as intelligent machines become more deeply integrated into daily life. Demonstrations will spotlight robots performing a range of tasks designed to enhance convenience, productivity and quality of life, signalling how automation could address labour shortages and demographic shifts.

Also Read: The US$71,000 Bitcoin bounce lacks foundation but Japan’s rally has real teeth

Under the resilience theme, SusHi Tech will present technologies to strengthen cities against natural disasters and climate-related risks. Exhibits will focus on earthquake preparedness, flood mitigation and rapid recovery systems.

Attendees can also join site tours of Tokyo’s critical infrastructure, including vast underground flood-control reservoirs that protect the metropolis from river overflows.

In entertainment, the event will explore how technology is reshaping creative industries spanning anime, manga, music and sports. Beyond the main venue, partner events across the city will invite global visitors to engage with Tokyo’s cultural landscape, including a walking event along the KK Line, an elevated expressway undergoing a transformation into a pedestrian space.

SusHi Tech Tokyo 2026 is also set to attract a record number of startups. More than 700 companies from around the world will exhibit, up from 607 last year. The global pitch contest, SusHi Tech Challenge 2026, will see 20 finalists selected from 820 applicants across 60 countries and regions.

A new initiative, “SusHi Tech Global Startups,” will provide intensive support to growth-stage companies through collaboration between the Tokyo Metropolitan Government and ecosystem partners.

Global investors will take the stage to share insights into Japan’s startup landscape and evolving investment strategies, with open meetups designed to facilitate direct engagement.

The conference will also spotlight Japan’s top university startups, innovative SMEs from across the country and student-led initiatives such as “ITAMAE,” where students independently plan sessions and support overseas founders.

With tickets now on sale, including discounted early-bird passes until February 28, SusHi Tech Tokyo 2026 is shaping up not only to be a showcase of emerging technologies but also a statement of Tokyo’s ambition to lead in building sustainable, human-centric cities for the future.

Image Credit: SusHi Tech 2026

Agentic AI is powerful – but power isn’t product-market fit

OpenClaw has been circulating heavily across tech Twitter and developer communities. Agentic AI. Autonomous assistants. AI that “actually does things”.

The narrative is seductive: AI that doesn’t just respond, but acts. Checks your inbox. Runs scripts. Controls systems. Executes workflows. It feels like a glimpse into the future. And in many ways, it is.

But the more important question isn’t whether OpenClaw is powerful. It’s whether power alone is product-market fit.

Infrastructure always comes before interface

Every technological shift follows a pattern. Infrastructure comes first. Interface comes later. Mass adoption follows usability. Monetisation follows adoption.

Linux preceded macOS. Terminal preceded GUI. Self-hosted email servers preceded Gmail. Open-source wallets preceded consumer crypto apps.

OpenClaw sits firmly in the infrastructure phase of agentic AI.

It validates something important: Autonomous AI agents are not theoretical anymore. They are technically viable. That matters. But viability and usability are two different markets.

The installation reality

I tried installing OpenClaw myself.

It took me minutes.

But that is because I have a technical background. I understand environments, configurations, system permissions, and hosting layers. I am comfortable unpacking files and troubleshooting.

Now imagine:

  • A small business owner.
  • A marketing lead.
  • A 50-year-old founder.
  • A creator trying to automate workflows.

Would they self-host? Configure execution permissions? Think about security boundaries? Debug dependency issues?

Unlikely.

This is not a criticism of capability. It is segmentation.

OpenClaw is designed for users who are technically equipped to operate infrastructure-level systems.

That is a niche. And niches are powerful, but they are not the mass market.

Also Read: Generative AI fatigue: Are we over‑automating creativity?

The product-market fit gap

Much of the public discourse makes it sound as if agentic AI is ready to replace assistants tomorrow.

But product-market fit requires more than technical capability.

It requires:

  • Frictionless onboarding.
  • Clear guardrails.
  • Invisible hosting.
  • Managed security.
  • Defined execution boundaries.
  • Support for non-technical users.

Power excites technologists. Simplicity converts markets.

If a user cannot install, configure, and confidently manage a system, adoption slows. And when adoption slows, monetisation slows with it.

The total addressable market for developer-grade AI is not the same as the total addressable market for consumer-grade AI. And that distinction matters for founders building in this space.

Infrastructure is step one, not the finish line

OpenClaw is not the problem.

It is proof.

It proves agentic AI is real.

But infrastructure alone does not create scale.

Someone will productise this layer. Someone will abstract the complexity. Someone will build guardrails by default. Someone will turn it into something that feels like using an app instead of running a server.

That is when adoption widens.

A case study in evolution

Before Seraphina became a consumer-facing AI assistant, she was my internal system. Powerful. Flexible. Built for me.

If I had released that early version publicly, adoption would have been zero. Not because it lacked capability. Because it required too much configuration.

I understood the parameters. I defined execution rules. I knew where clearance was required. I knew what she should and should not automate. Most users don’t have that clarity yet. So we simplified. We added guardrails. We reduced friction. We abstracted complexity. We made hosting invisible. We prioritised usability over raw power.

The ideology remained the same. The interface changed. That difference is product-market fit.

Also Read: AI in action: How governments are using technology to predict, prevent, and personalise

Automation without process clarity is risk

There is another layer most hype cycles ignore: governance. Agentic AI that can execute commands introduces operational risk if boundaries are unclear.

If someone doesn’t understand:

  • Their workflow.
  • Their approval layers.
  • Their data movement.
  • Their access permissions.

Then full autonomy becomes fragile.

In my own systems, certain actions require explicit clearance. Automation only works safely when processes are clearly defined.

This is why I often say: Automate when you know your process. If the process itself is unclear, automation amplifies confusion. Security risk and process ambiguity become friction points — not growth accelerators.

Not everyone needs to learn everything

There is also a broader founder lesson here.

I recently built a full system using Vibe Coding in under an hour. I signed up and executed immediately.

Others have taken courses on similar concepts and still haven’t built anything.

This is not about intelligence. It is about exposure, comfort, and alignment. Just because a capability exists doesn’t mean everyone must master it.

I cannot run a hawker stall or a beauty salon efficiently. That doesn’t diminish my ability. It means my skill set lies elsewhere.

In every tech wave, there are:

  • Builders (infrastructure experts)
  • Translators (product and interface designers)
  • Users (operators and businesses)

All roles are valid.

And if you’re stepping into deep technical territory, one of the smartest moves is not learning everything yourself, but partnering with someone who already speaks that language.

When I entered education, I partnered strategically. It reduced friction. It accelerated execution. It saved time.

Time is the real currency in technology cycles.

The shortcut is not omniscience. The shortcut is access to experience.

The adoption curve is always slower than hype

Social media compresses perception. When everyone talks about a technology, it feels ubiquitous. But conversation does not equal penetration. OpenClaw excites technologists. Agentic AI excites futurists. Investors see long-term potential.

But mass-market adoption follows a different curve. Infrastructure. Abstraction. Interface. Trust. Then scale.

OpenClaw is step one.

The revolution is real. But revolutions rarely become mainstream overnight.

The opportunity is real — participation is optional

Agentic AI will reshape workflows. Autonomous assistants will become normal.

But not every founder needs to install infrastructure. Not every operator needs to configure agents. Not every business needs to self-host. Some will build engines. Some will productise them. Some will simply use them.

Powerful technology is not automatically mass-market technology.

And that’s not a flaw. It’s a phase.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.

Enjoyed this read? Don’t miss out on the next insight. Join our WhatsApp channel for real-time drops.

Featured image courtesy: Canva

The post Agentic AI is powerful – but power isn’t product-market fit appeared first on e27.


The “Valley of Death” isn’t a funding problem — it’s a risk design problem

Deep tech startups rarely fail because the science is uninteresting or the problem is irrelevant. Many fail in the narrow stretch between a working technology and a scalable business, when capital runs out, timelines stretch, and risk shifts faster than funding models can adapt. This gap is commonly referred to as the “Valley of Death”.

What makes this phase especially lethal is not a single missing ingredient, but a mismatch between how risk actually unfolds and how it is financed, managed, and priced.

Deep tech commercialisation is fundamentally a risk allocation problem: most models misprice where technology, capital, and time actually fail, so the “Valley of Death” keeps reopening.

The deep tech risk budget

In software, the dominant risk is market risk — most companies die because nobody cares enough to pay, not because the app can’t be built. In deep tech, the risk budget flips: technology feasibility and funding structure dominate, while market risk is often more about timing than demand.

The risk budget by category (SaaS intuition versus deep tech intuition):

  • Technology risk: SaaS 10 per cent (code is almost always buildable); deep tech 40 per cent (physics and scale‑up can fail terminally).
  • Market risk: SaaS 50 per cent (“no market need” is common); deep tech 15 per cent (problems are obvious; timing is the uncertainty).
  • Operating/supply chain risk: SaaS 20 per cent (GTM and execution complexity); deep tech 15 per cent (scaling hardware kills many ventures).
  • Funding risk: SaaS 20 per cent (metrics‑driven, staged by growth); deep tech 30 per cent (misaligned with five to seven-year fund cycles).

Any model that doesn’t explicitly decide who owns these risks, when, and with what exit path is effectively flying blind.
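Read as data, the contrast in the risk budgets is easy to make concrete. A minimal sketch in Python, using the article's percentages (the category keys and helper name are my own shorthand):

```python
# Risk budgets from the comparison above, as shares of total venture risk.
# Percentages are taken from the article; only the structure is illustrative.
SAAS = {"technology": 10, "market": 50, "operating": 20, "funding": 20}
DEEP_TECH = {"technology": 40, "market": 15, "operating": 15, "funding": 30}

def dominant_risk(budget):
    """Return the category that consumes the largest share of the budget."""
    return max(budget, key=budget.get)

# Sanity check: each budget should account for 100 per cent of the risk.
assert sum(SAAS.values()) == 100 and sum(DEEP_TECH.values()) == 100

print(dominant_risk(SAAS))       # market risk dominates in SaaS
print(dominant_risk(DEEP_TECH))  # technology risk dominates in deep tech
```

The point of the exercise is the inversion: any financing model tuned to the SaaS budget will systematically misprice a deep tech venture's dominant risk.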

How common models shift (or ignore) risk

  • Traditional VC in deep tech: Spreads bets and accepts high failure rates, but fund timelines (5–7 years) clash with 10+ year deep-tech gestation. Technology and funding risk compound, and companies often die with working prototypes but no runway.
  • Corporate venture and pilots: Corporations help with adoption, but operating and technology risk remain with under‑resourced startups. Timing risk is severe: slow procurement and internal politics can strand ventures mid‑pilot.
  • University spin‑outs and TTOs: Science is validated, but scale‑up, supply chain, and regulatory risk are under‑priced. Many spin‑outs stall between lab prototype and industrial‑grade product.
  • Venture studios: Studio playbooks built for software underestimate capex, regulatory timelines, and hardware complexity when applied to deep tech.

Also Read: Dow hits record high, Nasdaq tumbles 0.6 per cent, Bitcoin miners flee: Signals deeper stress than price alone

Across these models, the Valley of Death persists because risk is assumed rather than designed.

A risk‑first, “foundry” approach

There is a class of foundry‑style models that start from the risk budget and work backwards:

  • Enter at a high TRL (technology readiness level), avoiding pure science discovery risk and focusing on industrialisation.
  • Launch ventures with pre‑sold demand and day‑one revenue to compress market and funding risk.
  • Centralise legal, finance, and supply chain as a “business‑in‑a‑box” to reduce operating risk.
  • Architect each venture around a specific 2–3 year path to liquidity so capital and timelines align.

Dragonfly Ventures and its Accelerated Deep Tech Commercialisation (ADTC) model is one example: it inverts the traditional risk stack by sourcing proven assets, securing day‑one customers, and designing for near‑term exits, turning startup success from a low‑probability bet into something closer to a yield problem.

What Southeast Asia needs to decide

For Southeast Asia to unlock its deep tech potential, the ecosystem will need to make explicit choices:

  • Universities: lean into technology risk and push assets to higher TRLs before spin‑out.
  • Corporates: underwrite timing and market risk with real offtake and industrial partnerships, not just pilots.
  • Funds and foundries: innovate on ownership, liquidity, and operating models to ensure deep tech aligns with private capital cycles.

The Valley of Death won’t close by “more funding” alone; it will close when the region treats risk as a design variable in how we build, fund, and scale frontier tech companies.

Featured image generated using AI.

The post The “Valley of Death” isn’t a funding problem — it’s a risk design problem appeared first on e27.


The generalist marketing agency is dead: Navigating the ‘great reset’ of 2026

If 2025 felt like an uphill battle for marketing agencies, the data confirms it wasn’t just in your head. It was a structural inflection point.

Across the industry, the sentiment is identical: the old agency model, built on selling hours from lower-cost economies, on execution, and on “activity reports,” is collapsing.

As noted in a recent Entrepreneur analysis, agencies are squeezed between shrinking client budgets and skyrocketing expectations. The days of justifying a retainer with “we increased engagement by 40 per cent” are over. If that engagement doesn’t map directly to revenue, pipeline velocity, or measurable business growth, clients are walking away.

But as we settle into 2026, the winners are already emerging from the wreckage. They aren’t just “surviving” the AI revolution; they are weaponising it to move upstream.

The commoditisation of “doing”

The root of the crisis is the commoditisation of execution. Basic content creation, routine analytics, and campaign optimisation services that once commanded premium fees are now being automated in-house or handled by low-cost AI agents.

According to 2026 market predictions from the World Federation of Advertisers, the industry focus has shifted entirely from “efficiency” (doing things faster with AI) to “effectiveness” (delivering better outcomes). Agencies that stick to the “we do everything” generalist model are finding themselves in a race to the bottom on price.

The era of the linear workflow is over. Successful agencies are dismantling the old ‘strategy-to-creative-to-media’ handover in favour of agile ‘pods.’ By utilising predictive modelling, these integrated teams ensure that strategy and execution happen in tandem, not sequentially.

Also Read: The era of ‘black box’ pricing is over: Why transparency is the new currency in B2B marketing

The new currency: Strategic creative planning

If execution is cheap, insight is priceless.

The agencies thriving in 2026 have pivoted to selling strategic creative planning. They don’t just “make ads”; they use AI to decode cultural nuances, competitor strategies, and audience motivations before a single dollar is spent on media.

This shifts the agency’s value proposition from “outsourced hands” to “market intelligence partner.” This is where the next generation of AI tools is bridging the gap, allowing agencies to reclaim their premium status by offering predictive certainty rather than just creative guesses.

How AI is rewiring creative strategy

To survive, agencies must integrate human intelligence with AI capabilities that go beyond surface-level metrics. We are seeing a rise in platforms specifically designed to handle this “heavy lifting” of strategic analysis.

For instance, AI creative planning solutions are allowing agencies to reduce campaign research time from weeks to minutes. By using large language models to analyse massive datasets, agencies can now predict click-through rates with significantly higher accuracy than human instinct alone, effectively modelling success before launch.

This approach aligns with the “Psychographic Profiling” methodology discussed in Plug and Play APAC’s coverage of AI in marketing. Instead of broad demographic targeting, agencies can now identify thousands of micro-attributes and behavioural preferences, allowing them to cluster audiences and generate content derivatives that resonate on a personal level.

Also Read: Recognised by Google DeepMind, SOMIN aims to redefine AI-powered marketing

Furthermore, academic research demonstrates how AI can be used to “decode” competitor strategies. By analysing high-performing content across an industry, these tools can generate data-backed creative briefs (user stories), giving agency strategists a blueprint for what is working right now in the market.

The 2026 mandate

For agency leaders, the path forward is clear but difficult. 2026 is a reset year.

You must stop fighting the old game of “billable hours for execution.” The new reality demands that you position yourself as a partner who owns the outcome.

  • Double down on specialisation: Be the absolute best at understanding one vertical’s pain points.
  • Invest in “pre-flight” intelligence: Use AI to validate creative strategies before production.
  • Sell the roadmap, not just the car: Clients will pay for the strategy that ensures the execution works.

The “service provider” model is dead. Long live the strategic partner.

Featured image courtesy: Canva

The post The generalist marketing agency is dead: Navigating the ‘great reset’ of 2026 appeared first on e27.


Ethereum leads fragile crypto rebound as markets navigate holiday thin liquidity

While traditional US financial markets are closed for the Presidents’ Day holiday, the cryptocurrency market continues to operate relentlessly. Global equity futures trade with light volumes, constrained further by Lunar New Year closures across mainland China and Hong Kong. Yet crypto never pauses.

The total market capitalisation rose 0.74 per cent over twenty-four hours to reach US$2.36 trillion. This modest gain reflects a market searching for direction amid thin liquidity and conflicting signals. My view is that this movement represents not a decisive turnaround but a fragile, technical rebound driven by specific ecosystem dynamics rather than broad macroeconomic conviction.

Ethereum’s relative strength provided the primary catalyst for today’s advance. The Ethereum Ecosystem category climbed 1.16 per cent, notably outpacing the broader market’s 0.74 per cent gain. This outperformance follows recent commentary from Vitalik Buterin, emphasising Ethereum’s base-layer neutrality, and from Coinbase CEO Brian Armstrong, noting that retail investors continue to accumulate ETH with diamond hands.

After six consecutive red monthly candles and a period of historic underperformance, Ethereum appears to be executing a technical bounce from deeply oversold conditions. The narrative surrounding the protocol has shifted subtly toward constructive long-term fundamentals, which seems to have encouraged spot buyers to step in at current levels.

However, this rebound remains precarious. Ethereum must maintain a price above the psychological US$2,000 threshold to sustain momentum. A failure to hold that level could swiftly erase today’s gains and reintroduce downward pressure.

Several secondary factors contributed to the market’s upward drift. Bitcoin exchange-traded funds recorded a net outflow of US$98.86 million, indicating persistent institutional caution toward the largest cryptocurrency. In contrast, Solana ETFs attracted a modest US$2.34 million in inflows, suggesting investors are selectively rotating capital toward alternative layer-one protocols. This divergence highlights a market in transition, where capital flows are becoming more discerning rather than broadly risk-on.

Meanwhile, the Fear and Greed Index inched higher from 12 to 13, a marginal improvement that nonetheless leaves sentiment firmly in the Extreme Fear zone. This slight uptick implies the current bounce is fragile, likely driven by short-term positioning adjustments rather than a fundamental shift in investor psychology. The market’s weak eight per cent correlation with Gold further confirms that today’s move is crypto-specific, not a reflection of broader safe-haven or inflationary trends.

Also Read: Crypto market bleeds US$44B as US$78M Bitcoin liquidations spark panic

The near-term trajectory of the cryptocurrency market hinges on several technical levels and external catalysts. The immediate resistance sits at the US$2.37 trillion mark, which represents the 78.6 per cent Fibonacci retracement of the recent swing high to low. A daily close above this level could open the door to a relief rally targeting US$2.53 trillion. Conversely, the market must defend the US$2.17 trillion support, which marks the yearly low established on February 6.
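For readers unfamiliar with the technique, a Fibonacci retracement level is simply a fixed fraction of the distance between a swing high and a swing low. A minimal sketch, using illustrative swing values (the article does not state the actual swing points; the numbers below are hypothetical, chosen only so the 78.6 per cent level lands near the cited US$2.37 trillion resistance):

```python
# Standard Fibonacci retracement ratios used in technical analysis.
FIB_RATIOS = (0.236, 0.382, 0.5, 0.618, 0.786)

def retracement_levels(swing_high, swing_low, ratios=FIB_RATIOS):
    """For a decline from swing_high to swing_low, return the price level
    at which a bounce has retraced each given fraction of the move."""
    span = swing_high - swing_low
    return {r: swing_low + r * span for r in ratios}

# Hypothetical swing points in US$ trillions (not from the article):
# yearly low of 2.17, assumed swing high of 2.42.
levels = retracement_levels(swing_high=2.42, swing_low=2.17)
for ratio, level in sorted(levels.items()):
    print(f"{ratio:.1%} retracement: US${level:.2f} trillion")
```

Under those assumed swing points, the 78.6 per cent level works out to roughly US$2.37 trillion, matching the resistance the article identifies.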

A break below that floor would likely renew bearish momentum and test lower liquidity zones. Beyond price action, participants should monitor commentary from Federal Reserve speakers for any shifts in interest rate expectations. Changes in liquidity sentiment could rapidly alter the risk calculus for digital assets, especially in a holiday-thinned trading environment where modest order flow can produce exaggerated price moves.

From my perspective, today’s price action warrants cautious interpretation. The advance lacks the breadth and volume conviction that typically confirms a sustainable trend reversal. Ethereum’s leadership is encouraging, particularly given its oversold technical setup and improving narrative backdrop, but the broader market remains vulnerable to renewed outflows from Bitcoin ETFs and lingering fear among retail participants.

The selective inflow into Solana ETFs suggests a maturing market in which investors differentiate among protocols based on fundamentals rather than moving in unison. This selectivity is healthy in the long term but can produce choppy, range-bound price action in the near term. I believe the current environment favours patience over aggression. Traders should watch for confirmation above the US$2.37 trillion resistance before committing to a long position, while maintaining awareness of the US$2.17 trillion support as a critical risk-management level.

The cryptocurrency market’s resilience during traditional market holidays underscores its unique, always-on nature. Yet this constant operation can also amplify volatility when liquidity is thin and catalysts are scarce. Today’s modest gain, driven by Ethereum’s technical bounce and selective altcoin demand, offers a tentative reprieve for bulls but does not resolve the underlying tensions of persistent ETF outflows and extreme fear sentiment.

Also Read: Crypto market cap drops to US$2.3T as Fed rate cut hopes fade after hot jobs report

The path forward likely depends on whether spot buyers can consistently defend the US$2.17 trillion to US$2.37 trillion range. If they succeed, a relief rally toward US$2.53 trillion becomes plausible. If they fail, residual leverage and continued institutional caution could trigger another leg lower. In my assessment, the balance of evidence points to a market in consolidation, searching for a clearer macro signal or a sustained shift in institutional flows to establish a more durable direction.

Investors should approach this environment with disciplined risk management and a focus on high-conviction narratives. Ethereum’s recent outperformance, supported by protocol-level developments and accumulation by committed holders, presents a compelling case for selective exposure. However, the broader market’s dependence on Bitcoin ETF flows and macro liquidity conditions means that any single asset’s strength can be quickly overwhelmed by systemic headwinds.

The coming days will likely test whether today’s bounce can evolve into a more robust recovery or remain a fleeting pause within a larger corrective phase. For now, the cryptocurrency market offers a lesson in patience, where waiting for confirmation at key technical levels may prove more rewarding than chasing momentum in a landscape still defined by caution and selectivity.

Featured image courtesy: Canva

The post Ethereum leads fragile crypto rebound as markets navigate holiday thin liquidity appeared first on e27.


Rethinking value in B2B services: Why real results don’t happen overnight

In B2B services, people often expect value to appear instantly: a sudden jump in numbers, leads, partnerships, or revenue.

But the reality is simpler and less glamorous: real transformation happens through process, clarity, systems, and consistent execution.

Hype and promises of overnight results miss the point. What actually helps businesses operate better, grow faster, and avoid costly mistakes is structure. And because this kind of value builds over time, it is often misunderstood or measured incorrectly.

The illusion of instant results

Short-term metrics can be seductive. They promise clarity and control, but they often measure activity, not progress. When service providers chase quick wins, they risk optimising for the wrong outcomes — speed over substance, visibility over value.

Common pitfalls of the “instant results” mindset:

  • Overpromising outcomes that can’t be sustained
  • Prioritising short-term KPIs over long-term growth
  • Ignoring the deeper systemic changes clients actually need
  • Undermining trust when results plateau after early gains

Redefining value in B2B relationships

When a company engages a service provider, they’re not buying hours or slides. They’re buying movement — the shift from where they are to where they want to go.

Value in B2B services isn’t a transaction; it’s a transformation. It emerges from shared understanding, consistent delivery, and the ability to adapt together over time.

A campaign or strategy engagement is valuable when it brings more benefit than what the client invested. And value isn’t just financial. It includes time saved, confusion removed, manpower reduced, and opportunities created.

If you avoid months of trial and error, cut wasted spending, sidestep bad vendors, and land on a clearer, faster path, that is meaningful ROI.

Also Read: Why your 50s are the perfect time to start a business

True value comes from:

  • Strategic alignment: Understanding the client’s real business drivers, not just their immediate pain points.
  • Capability building: Helping clients grow their own capacity to sustain results.
  • Iterative improvement: Using feedback loops to refine and evolve solutions.
  • Partnership mindset: Treating success as mutual, not one-sided.

This is the foundation of value pricing: You pay for transformation, not just activity.

Value comes from outcomes, not optics

The real test of value is what remains after the engagement ends.

A business has grown in capability. A system now runs where there was chaos. A team has direction instead of confusion. A market strategy is clear instead of abstract. A partnership pipeline lives on instead of dying after one attempt.

Real value is a trajectory shift, not a moment. It’s the difference between constantly improvising and finally having a system.

The patience of progress

Real change takes time because it involves people, systems, and culture. The most effective B2B service providers know how to balance urgency with patience — delivering early wins while laying the groundwork for lasting impact.

Ways to build sustainable results:

  • Set expectations early about the timeline for meaningful outcomes.
  • Combine short-term deliverables with long-term capability goals.
  • Use transparent communication to show progress, even when results are still forming.
  • Celebrate learning milestones, not just performance metrics.

The long game of trust

Trust compounds over time. When clients see consistent effort, honest communication, and steady improvement, they become partners in progress. That trust becomes the foundation for deeper collaboration, innovation, and shared success.

Also Read: The architecture of bad deals: Moral hazard in modern business

The intangible value: What it feels like to work with a good partner

Every great service business carries an invisible asset — its ethos.

For us, value is not only in the tangible deliverables clients receive, but in the experience of working with a partner who is honest, transparent, and aligned with their success.

We believe in clarity instead of confusion. We break down complex processes into understandable steps and ensure the client always knows what is happening, who is doing the work, and why decisions are made.

We believe in transparency instead of rent-seeking. Our pricing is clean and direct. We don’t inflate costs or hide commissions. Most of the client’s money goes directly into the work — researchers, vendors, outreach teams, content creators — not into layered markups.

We believe in fairness. Vendors are paid properly; clients receive fair value; the ecosystem grows on trust instead of exploitation.

And we believe in partnership. We don’t take work we can’t deliver. We don’t overpromise. We don’t disappear. We stay accountable from day one to the last milestone.

For many clients, these intangibles — honesty, clarity, competence — are worth more than any deliverable.

Why this matters: The B2B world is broken

Much of the B2B service ecosystem is built on opacity. Consultancies overcharge and underdeliver. Agencies outsource everything and hide their vendors. Freelancers disappear after payment. “Strategy decks” look impressive but produce nothing.

Globaloca was created as a counterpoint. The focus is not on selling time but on enabling progress. Emphasis is placed on clarity, structure, and repeatable systems rather than ad hoc decision-making or experimentation.

The platform provides transparent pricing, verified vendors, multiple quotes, milestone-linked payments, and performance tracking.

The underlying belief is that value in B2B services should come from demonstrated competence, transparency, and execution rather than promises or presentation.

Because the B2B service industry doesn’t need more noise. It needs a trust infrastructure.

Image courtesy: Canva

The post Rethinking value in B2B services: Why real results don’t happen overnight appeared first on e27.