Funding in a bear market: What investors are looking for now

Funding in a bear market entails a shift in focus from hype to the hard facts of business. According to Crunchbase data, North American venture investment declined by 37 per cent in 2023.

This can be attributed to a general wariness. Investors remain vigilant about the startups they back. They still have money to invest, but it is now directed towards resilient businesses with proven models and sustainable growth trajectories.

Similarly, as investor Trace Cohen notes, in an economic downturn, investors "are on the lookout for resilient companies that can weather economic downturns".

In essence, investors today prefer startups with compelling profitability prospects, revenues, and capital plans: companies that will thrive even when economies stagnate or decline.

Back to fundamentals: Profit and revenue growth

Whereas in a bull market investors were mainly seeking growth, and growth only, in a bear market even the best growth prospects are not enough: profitability and sound business models are the key. Indeed, as one analysis points out, VCs have "put [a] laser focus on profitability and the sustainability of startup business models when making investments".

The importance of profitability is further underscored by startup attorney Erika Knierim: "In a bear market, investors seek out businesses with robust value propositions and clear paths to profitability".

In other words, will this startup survive, or even prosper, without limitless access to investment dollars? Founders should emphasise the revenue they have actually earned. Impressing with vanity metrics such as raw user counts no longer works.

Pitch decks must now answer: "How do you make money? And why will people pay you for it?" Investors will look at current or near-term revenue generation and strong profit margins. In down markets, "cash is king": firms with more cash and a longer runway demonstrate better financial management.

Capital efficiency and runway

Closely related to this is capital efficiency. Investors these days ask startups to make every dollar count. “Investors favour capital-efficient companies,” says venture advisor Lance Cottrell, because any startup that burns huge amounts of money before reaching market may die if more funding dries up. In a bear market, VCs expect longer runways, often 24 to 36 months of cash, to avoid raising in a down cycle.

This forces founders to either raise more money at the cost of diluted equity or cut expenses and growth plans. Investors reward startups that can do more with less, such as outsourcing production or focusing on minimum viable features.

As Knierim puts it: “Cash is king in a bear market…Investors will appreciate a lean operation that maximises capital efficiency”.

Also Read: Seizing opportunities: Accelerators as a strategic choice in bear markets

Revenue traction and growth metrics

Profitability is important, of course, but so is growth momentum. Investors want evidence that a startup's product is gaining traction and users at a healthy rate.

Unlike before, however, that momentum now has to be demonstrated through financial metrics: revenue growth rate, retention, and similar measures are what VC investors focus on.

Pitch decks themselves have become slightly leaner, sometimes adding a brief "why now" market section, and TechCrunch has reported that founders now centre their pitches on timing and traction to attract investors.

Startups in currently favoured categories such as AI, fintech, and climate technology can benefit from the tailwind of a popular sector. Even there, however, investors still expect to see data: "Many seed-stage startups…raise capital by reaching out to fewer than 50 investors."

Strong team and execution ability

Investors will naturally look at the team; in a bear market, the founding team may be the deciding factor. VCs still take risks in a downturn, but when risk is higher they place a premium on founders with proven execution ability or deep experience in the field. They will ask hard questions about the founding team and how the company has progressed, and Cottrell believes founders should welcome those questions.

Having the right calibre of staff and mentors helps reduce uncertainty for investors. So does demonstrating startup skills such as pivoting or cutting costs while keeping the company alive.

As one expert says, “VCs are looking more and more for companies ‘built to last’ with strong balance sheets and contingency plans. If your team can confidently communicate their milestones, spend, and projections, this helps build trust.”

Market opportunity and differentiation

Even in a downturn, market opportunity matters. Investors fund only startups that solve pressing problems or hold an advantage over their peers. When funding is scarce, competitive differentiation, whether through technology, partnerships, or a niche focus, becomes critical.

For instance, focusing on a specialised market segment today can help a pitch stand out in a crowd. Startups with unique IP, regulatory barriers, or locked-in customer contracts could justify valuations even in a bear market.

Also Read: Thriving when markets tank: Strategic lessons from history’s bear cycles

Importantly, investors assess the size of the addressable market differently now: they prefer defensible markets over just big ones. A large but fractured market may be less attractive than a small market that a startup can dominate. As one VC advises, “be very clear on your end-user and why they will pay now”.

Realistic valuations and terms

Valuations are generally lower during a bear market, and terms are tougher. Founders can expect more rigorous due diligence and term sheets that reflect current conditions.

According to Moonfare’s analysis, startups often face “down rounds” and more burdensome deal terms in this environment; investors may demand board seats, liquidation preferences, or pay-to-play provisions.

While this might be painful for founders, taking a fair valuation now can preserve more equity in the long run. The key is pricing to market reality. As Robin Guo said, “Don’t raise based on ego, raise based on reality”. Set a pre-money valuation that reflects recent deals and your traction. An investor-friendly cap table today can pay dividends later when the markets recover.

Adapting fundraising strategy

Startups must also adapt how they seek funding. In an economic slowdown, raising the next round takes longer, so founders should focus on building relationships: reach out to investors frequently but briefly, attend meetings in person whenever possible, and approach a wider set of investors.

Founders should also consider alternative financing options, including grants, corporate investment, or, in extreme cases, bootstrapping. Every founder needs a ‘plan B’ for accessing funds.

As a seasoned entrepreneur states: “During difficult times, I am far more likely to invest in a company that will use my money to grow rather than one that uses it just to survive”. Put simply, explain how you plan to use the funds to accelerate growth rather than merely survive.

Conclusion

Funding is not easy in a bear market, but opportunities will emerge for startups that fit investors’ new criteria. Bear markets make investors conservative, yet startups that offer value, operate cost-efficiently, and can show how returns will be achieved will still impress them.

For investors, the basic principles that matter in any market have now become non-negotiable. By centring pitches on profitability, traction, and growth, founders signal that they understand this. In doing so, they give themselves the best chance not only of making it through this bear market but of emerging stronger when better times return.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.

The post Funding in a bear market: What investors are looking for now appeared first on e27.

From hardware to trustware: How cyber passports will prove digital trust

Southeast Asia has long been the engine room of the global electronics trade. From advanced semiconductor facilities in Malaysia to precision engineering hubs in Singapore and Vietnam, the region built its reputation on sheer manufacturing excellence. For decades, “made in SEA” meant physical quality, scale and reliability.

But how we define quality is changing fast. Today, you can build a product with flawless hardware and world-class engineering, yet the global market will still reject it if you cannot prove its digital resilience.

We are watching a massive non-tariff barrier rise around the world’s most lucrative markets. This isn’t about quotas or taxes. It really boils down to trust. With the European Union enforcing the Cyber Resilience Act and Singapore pioneering the Cybersecurity Labelling Scheme, the message is loud and clear that digital security is now just as critical as physical durability.

Crucially, the scope of these rules is much bigger than most founders realise. The EU’s CRA deliberately uses the term “products with digital elements”. This means the regulatory net isn’t just catching smart TVs and Wi-Fi routers anymore. It covers everything from physical IoT hardware to standalone software, firmware and mobile apps.

For manufacturers and developers in our region, this is far more than a compliance hurdle. It is a strategic opportunity. By mastering digital trust, Southeast Asian tech companies can solidify their position not just as producers but as leaders in the next generation of global technology.

The single-entry visa problem

Right now the global industry is trying to solve a 2026 problem using tools from the 1990s.

Manufacturers face a genuine regulatory tsunami. Between the EU’s RED-DA, the UK’s PSTI and the US Cyber Trust Mark, there are over 40 distinct standards globally. The current approach to handling all this compliance is incredibly fragmented. You test a product for one specific market, get a PDF certificate and basically stuff it in a drawer.

Think of that PDF compliance certificate like a single-entry visa. It gets your product into one country for one specific trip at one exact moment in time. If you want to sell that exact same smart thermostat or software suite in Germany six months later, that old visa is probably useless because someone discovered a new vulnerability in a third-party code library you use.

This approach is brittle. It forces engineering and compliance teams to scramble endlessly, filling out massive spreadsheets and chasing third-party labs every time they want to enter a new market. It is also wildly expensive, often costing tens of thousands of euros per product. Worst of all, it doesn’t actually prove the device or software is safe today; it only proves it was safe on the day the lab tested it.

Also Read: When AI starts acting, who is responsible? Rethinking trust in the age of agents

Moving toward cyber passports

To fix this mess, we need to completely stop thinking about compliance as a static document. We need to start treating it as a core product characteristic.

This is exactly where the industry is heading right now to establish true digital trust. The vision we are moving toward is a future where every single product with digital elements you ship carries a cyber passport. While we are still building the infrastructure for this reality today, the destination is incredibly clear.

Unlike a static PDF or some generic digital ID, a cyber passport would be a dynamic and product-centric vault that travels with the product throughout its entire lifecycle. It would securely hold your third-party lab evaluations, your software bill of materials and your self-declarations all in one connected place.
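As a thought experiment, such a passport might be modelled as a simple structured record. The sketch below is purely hypothetical: the field names, the labelling-scheme fields, and the one-year freshness rule are invented for illustration and do not describe any real standard.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of a "cyber passport" record.
# All field names and values are illustrative, not a real standard.

@dataclass
class LabEvaluation:
    lab: str      # accredited third-party lab
    scheme: str   # labelling scheme the evaluation was done under
    level: str    # result or level achieved
    issued: date  # evaluation date

@dataclass
class CyberPassport:
    product_id: str
    sbom: list[str] = field(default_factory=list)  # software bill of materials (component IDs)
    evaluations: list[LabEvaluation] = field(default_factory=list)
    self_declarations: list[str] = field(default_factory=list)

    def is_current(self, today: date, max_age_days: int = 365) -> bool:
        """Trust has an expiration date: require at least one recent evaluation."""
        return any((today - e.issued).days <= max_age_days for e in self.evaluations)

passport = CyberPassport(
    product_id="THERMOSTAT-01",
    sbom=["libssl-3.0.13", "mqtt-client-1.4"],
    evaluations=[LabEvaluation("ExampleLab", "CLS", "Level 4", date(2025, 6, 1))],
)
print(passport.is_current(date(2025, 12, 1)))  # True: evaluated within the last year
```

The point of the sketch is that the record is machine-readable and carries its own freshness check, unlike a static PDF.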

We are already seeing the groundwork for this industry shift being laid through mutual recognition agreements. Singapore has shown incredible leadership here by establishing agreements with places like Finland and Germany. This essentially means a product earning a Singapore CLS Level 4 label should be recognised in Europe without the manufacturer having to start the whole testing process from scratch.

The ultimate goal of a cyber passport is to digitise and scale exactly this kind of portability. Once fully realised, they will act as universal translators for trust. When a German regulator or a Japanese buyer asks if a product is secure, a cyber passport won’t just hand them a dusty PDF. It will provide verified and up-to-date proof that the technology actually meets local requirements based on the credentials it already holds.

Treating compliance like a lifestyle

Of course, a passport is pretty useless if the ID photo is ten years old. Trust has an expiration date.

The biggest mistake I see organisations making is treating compliance like cramming for a final exam. They rush to fix vulnerabilities right before a product launch, get their official stamp and then completely ignore security until the next audit rolls around.

Regulations like the CRA in Europe are actively killing this model. They legally mandate that you manage vulnerabilities for the entire support period of the product. You cannot just pass a compliance test once. You have to live it every single day.

This reality requires a massive shift toward continuous compliance operations.

Emerging maturity frameworks like PSCOPE are helping organisations figure out exactly where they stand today so they can prepare for tomorrow. At the initial level, you might be managing compliance via messy email threads and ad-hoc checks. But at an optimised level, you have real-time monitoring in place. When a vulnerability is found in a third-party library you use, your system automatically alerts you, updates your risk register and flags that specific product’s future cyber passport profile as needing attention.
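The "optimised level" described above can be sketched as a small event handler. Everything here is an assumption for illustration: the risk-register shape, the component catalogue, and the flagging mechanism are invented names, not part of PSCOPE or any real tooling.

```python
# Sketch of continuous-compliance handling for a new vulnerability report.
# All structures (risk_register, passport_flags, catalogue) are illustrative.

risk_register: list[dict] = []
passport_flags: set[str] = set()  # products whose trust credentials need re-attestation

def on_vulnerability(component: str, severity: str,
                     products_by_component: dict[str, list[str]]) -> list[str]:
    """When a CVE lands in a third-party library: record the risk and
    flag every product whose SBOM contains that component."""
    affected = products_by_component.get(component, [])
    risk_register.append({"component": component, "severity": severity,
                          "products": affected})
    passport_flags.update(affected)  # these products need attention
    return affected

catalogue = {"libssl-3.0.13": ["THERMOSTAT-01", "CAM-02"],
             "mqtt-client-1.4": ["CAM-02"]}
print(on_vulnerability("libssl-3.0.13", "high", catalogue))  # ['THERMOSTAT-01', 'CAM-02']
```

In a mature setup this handler would be wired to a vulnerability feed rather than called by hand, but the flow (alert, update the register, flag affected products) is the same.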

This isn’t just about avoiding regulatory fines. It is about keeping your operational sanity. By integrating compliance into the daily rhythm of product development, much like how software teams track their velocity, security becomes a quiet background hum rather than an exhausting fire drill.

Also Read: Security, trust, and the future of finance in an AI-driven world

The rise of agent-to-agent procurement

Why does all this matter right now? Because the buyer is fundamentally changing.

We are moving incredibly fast toward an agent-to-agent economy. In the very near future, B2B procurement won’t involve a human analyst sitting at a desk reading your user manual to verify your encryption standards.

A procurement AI in Jakarta looking to source thousands of connected sensors or software licenses will simply query your manufacturer AI agent. It will ask to see a cyber passport for the product. It will check the digital signatures verified by labs. It will confirm that your continuous monitoring is active and healthy. And it will make a purchasing decision in a matter of milliseconds.

If your product’s trust data is locked away in a PDF on someone’s hard drive, you won’t even be invited to the negotiation table.

Digital trust is the new currency

Southeast Asia has spent decades building a global reputation for manufacturing excellence. The next decade will undoubtedly be defined by digital trust.

The regulations coming out of Brussels and Singapore are not just bureaucratic hurdles. They are market filters. They will wash away any products that cannot demonstrate true resilience and leave the market wide open for high-trust actors.

The entire tech ecosystem is moving toward a reality where digital trust is verified instantly through cyber passports. By adopting a continuous operations mindset today and preparing your product lines for this future, you aren’t just ticking a regulatory box. You are minting the only currency that actually matters in the modern digital economy.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.

Shadow automation: The new insider risk

Shadow IT used to be easy to picture. Someone signed up for an unapproved SaaS tool, stored sensitive data in it, and security found out later. That pattern still exists, but it is no longer the main story.

The bigger shift is shadow automation. Employees are quietly building automation pipelines across tools the company already uses, plus whatever glue they can access. The result is not a new “app” to discover. It is a new set of data flows and actions that execute inside your environment with limited oversight.

This is why insider risk feels different right now. It is less about a single person doing something bad, and more about ordinary productivity behaviour creating persistent, privileged pathways that no one owns end-to-end.

The new shape of shadow work

Automation has become the default way modern work gets done. People connect forms to spreadsheets, tickets to chat, CRM fields to email sequences, and alerts to on-call rotations. They do it because it saves time and reduces manual errors.

The problem is that the easiest path is rarely the safest path. A “quick workflow” is often built with broad permissions, long-lived tokens, and vague ownership. It runs quietly in the background, sometimes for years, long after the original creator has moved on.

Shadow automation is the same impulse as shadow IT, but with more leverage. It touches multiple systems at once, moves data automatically, and can trigger actions without a human present.

Why automation becomes an insider risk even without malicious intent

Security teams are used to controlling people. Policies, training, approvals, and monitoring are built around human behaviour. Automation bypasses that assumption.

A person can only export so much data in a day. A workflow can export continuously. A person might hesitate before sending sensitive information to an external destination. A script will do exactly what it was told, every time, even if the context changes.

The risk compounds when automation is created by people who are not thinking like engineers. They are not wrong for that. It is simply not their job. But it means basics like least privilege, error handling, logging, and key rotation are often missing.

When something breaks, it usually breaks silently. When something is abused, it often looks like legitimate API activity.

Also Read: Trust by design: Why cybersecurity is the new economic backbone

Where shadow automation hides

Most organisations still look for shadow IT through app inventories and procurement controls. That approach misses the reality of automation because the components look “approved” in isolation.

A workflow tool might be sanctioned. A cloud storage platform might be sanctioned. An internal API might be sanctioned. The risky part is the chain and the permissions that connect it all.

You see shadow automation in personal scripts scheduled on laptops or jump boxes, ad-hoc serverless functions created for a project, webhooks that forward data to external endpoints, and AI agents connected to corporate systems to “help” with tasks.

The common pattern is that automation inherits trust. It uses valid tokens, valid accounts, and valid access routes. That is exactly what makes it hard to see and easy to underestimate.

The blind spot security keeps stepping into

Traditional insider risk programs tend to ask, “Who accessed what?” Shadow automation forces a more uncomfortable question: “What is acting on our behalf, and under whose authority?”

That second question exposes gaps in ownership and lifecycle. Who is responsible when the workflow runs at 2 a.m.? Who gets the alert when it fails and retries? Who reviews its permissions when systems change? Who revokes access when an employee leaves?

If there is no clear answer, you do not have an integration. You have an unmanaged privileged actor.

What “good” looks like without killing momentum

The goal is not to ban automation. If you try, you will create the worst possible outcome: the same automation, but quieter and harder to govern. The goal is to make safe automation easier than unsafe automation.

Start by treating automation as an asset class. That means you maintain an inventory of workflows, scripts, agents, and connectors that can access sensitive systems. You do not need perfection on day one. You need a place where ownership and intent are recorded and can be reviewed.
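A minimal version of such an inventory can be sketched in a few lines. The shape below, a registry that refuses ownerless entries and flags unusually broad reach, is an illustration of the idea, not a prescribed schema; all names and the three-system threshold are assumptions.

```python
from dataclasses import dataclass

# Minimal sketch of an automation inventory: every workflow, script,
# agent, or connector gets a recorded owner and purpose. Illustrative only.

@dataclass
class AutomationAsset:
    name: str
    owner: str          # a human accountable for the automation
    purpose: str        # why it exists
    touches: list[str]  # systems it can read from or write to

registry: dict[str, AutomationAsset] = {}

def register(asset: AutomationAsset) -> None:
    if not asset.owner:
        raise ValueError(f"{asset.name}: every non-human identity needs an owner")
    registry[asset.name] = asset

def overly_broad(max_systems: int = 3) -> list[str]:
    """Flag registered assets with unusually broad reach for review."""
    return [a.name for a in registry.values() if len(a.touches) > max_systems]

register(AutomationAsset("ticket-to-chat", "alice", "forward P1 tickets",
                         ["helpdesk", "chat"]))
register(AutomationAsset("crm-sync", "bob", "sync CRM to billing",
                         ["crm", "billing", "warehouse", "email"]))
print(overly_broad())  # ['crm-sync'] — touches more than three systems
```

Even a registry this crude gives you the two things the paragraph asks for: recorded ownership and recorded intent, reviewable later.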

Next, focus on identity, because automation is identity at scale. Most automation risk is permission risk. Reduce broad scopes. Avoid long-lived keys where possible. Prefer managed identities and short-lived tokens. Make sure every non-human identity has an owner and a reason to exist.

Then address data movement explicitly. In many environments, data is not lost because storage was insecure; it is lost because it was copied into the wrong place as part of a “helpful” workflow. Decide which data types are allowed to flow into which destinations, and enforce it at the connector level where feasible.

Finally, bring change control to the places where it matters. Critical automations should have versioning, basic testing, and a kill switch. Even if the automation is “no-code,” it still needs a lifecycle. The more business-critical the flow, the closer it should look to a software discipline.

The practical first-quarter plan

If you want to reduce risk quickly, do three things in the next quarter.

First, identify your top automation surfaces. Pick the tools and platforms where automations are most likely to exist, and require owners to register anything that touches sensitive data or privileged systems.

Second, implement permission hygiene for automation identities. Review high-privilege tokens and connectors. Remove legacy access that no longer has a clear business justification. Put an expiration expectation on credentials that currently live forever.

Also Read: Cybersecurity: The evolution from digital safeguard to economic governance

Third, improve detection by looking for automation patterns rather than user patterns. Pay attention to unusual frequency, unusual destinations, and unusual chaining across systems. The signal is often not “a weird login,” but “a normal call happening at an abnormal rate.”
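That rate-based signal can be sketched very simply: compare each identity's observed call volume against its historical baseline. The five-times factor and the event shape below are illustrative assumptions, not tuned thresholds.

```python
from collections import Counter

# Sketch: flag non-human identities whose API call rate is far above
# their historical baseline. Thresholds and data are illustrative.

def abnormal_rates(events: list[tuple[str, str]],
                   baseline: dict[str, float],
                   factor: float = 5.0) -> list[str]:
    """events: (identity, endpoint) pairs observed this hour.
    baseline: expected calls per hour per identity.
    Returns identities calling at more than factor x their baseline."""
    counts = Counter(identity for identity, _ in events)
    return [i for i, n in counts.items() if n > factor * baseline.get(i, 1.0)]

baseline = {"svc-report-bot": 10.0, "svc-export": 2.0}
events = ([("svc-export", "/records")] * 40
          + [("svc-report-bot", "/summary")] * 12)
print(abnormal_rates(events, baseline))  # ['svc-export'] — 40 calls vs ~2 expected
```

Each call here is individually "normal", a valid token hitting a valid endpoint; only the rate gives the automation away.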

The cultural piece everyone avoids

Shadow automation is also a trust issue. Employees automate because they are trying to be effective, and often because official paths are slow or unclear. If security shows up only as a blocker, people will route around it.

A mature approach treats automation builders as partners. Give them safe defaults, clear guardrails, and lightweight ways to get approval for higher-risk workflows. Create a path where someone can say, “I built this,” without fearing punishment.

That is how visibility improves. And visibility is the prerequisite for control.

Closing

Shadow IT was about tools. Shadow automation is about power. It turns everyday access into repeatable execution across systems, often with more privilege and less oversight than anyone intended.

If you want to modernise insider risk, stop focusing only on what employees install. Start focusing on what runs on their behalf. The organisations that do this well will not slow down innovation. They will make automation safer, more observable, and easier to trust.

The new cybersecurity battlefield: Protecting trust in the age of AI agents

AI agents and chat interfaces are no longer limited to answering questions or recommending content. They increasingly act on behalf of users—approving transactions, scheduling actions, filtering information, and making decisions that once required human judgment. This shift is subtle but profound. When systems act for us, cybersecurity is no longer just about protecting data; it becomes about protecting trust.

When automation enters the workflow

In many organisations, AI agents are introduced to improve speed and efficiency. Customer support bots resolve tickets. Financial systems flag or approve transactions. Internal copilots summarise meetings and suggest decisions. At first, these tools feel like assistants. Over time, they become delegates.

The transition often happens quietly. A system that once suggested an action is now executing it. A chatbot that once escalated issues now resolves them autonomously. This is where the security conversation usually lags behind the product decision.

The moment trust becomes a concern

Trust issues tend to surface only after something goes wrong. A transaction is approved that should not have been. An automated message shares sensitive information. A system makes a decision that no one on the team can fully explain.

What makes these incidents different from traditional security failures is diffused responsibility. No single person made the decision. The system did—based on rules, models, and data pipelines built by multiple teams over time.

When users interact with AI through natural language, the system feels human. That perception increases trust, sometimes beyond what the system actually deserves. Users disclose more information. They question decisions less. Attackers understand this dynamic and exploit it.

Also Read: Hunters in the dark: AI agents and the cybersecurity trade-off

Accountability in machine-led decisions

AI agents change how accountability works. In human workflows, responsibility is clearer. A person approves a payment. A manager signs off on access. With AI agents, decisions are distributed across models, prompts, APIs, and permissions.

When something goes wrong, teams often ask:

  • Was it a data issue?
  • A model behaviour?
  • A prompt design flaw?
  • Or a lack of human oversight?

From a cybersecurity perspective, this ambiguity is a risk. Systems that act autonomously require explicit accountability frameworks, not implicit trust in automation.

New risks introduced by chat interfaces

Conversational interfaces create security risks that traditional systems did not face. Natural language is flexible, ambiguous, and emotionally persuasive. This opens new attack surfaces:

  • Prompt manipulation that bypasses safeguards
  • Social engineering through AI-generated responses
  • Over-permissioned agents that can act across systems
  • Users mistaking confident language for correctness

Unlike classic software vulnerabilities, these risks are behavioural. They sit at the intersection of human psychology and system design.

Overconfidence in AI-driven systems

Founders and teams are often overconfident in AI systems because they appear intelligent. A system that explains its reasoning convincingly can mask uncertainty or error. This creates a false sense of security.

Overconfidence shows up when:

  • Human review is removed too early
  • Audit logs are minimal or absent
  • Edge cases are dismissed as rare
  • Security is assumed to be “handled by the model”

In reality, AI systems amplify existing risks if governance does not evolve alongside capability.

Also Read: Trust by design: Why cybersecurity is the new economic backbone

Different sectors, different expectations of safety

Expectations of safety vary widely across sectors. In fintech or health, users expect rigorous controls and clear accountability. In media or productivity tools, the tolerance for error is higher until trust is broken.

AI agents blur these boundaries. A general-purpose chatbot used in a low-risk context today may be embedded in a high-risk workflow tomorrow. Security assumptions must travel with the agent, not the use case.

Rethinking responsibility and risk

The key shift is not technical; it is conceptual. Teams must move from asking “Is the system secure?” to “Who is responsible when the system acts?”

This means:

  • Designing AI agents with least-privilege access
  • Keeping humans in the loop for high-impact decisions
  • Logging not just actions, but reasoning paths
  • Stress-testing systems for misuse, not just failure
  • Training teams to question AI output, not defer to it

Security becomes a shared discipline across product, engineering, and leadership—not a downstream checklist.
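As a rough illustration of the first three points above (least-privilege access, humans in the loop for high-impact actions, and logging reasoning alongside actions), here is a hypothetical gate an agent's actions might pass through. The scope names, the high-impact list, and the approval flow are all invented for the sketch.

```python
import logging

# Illustrative gate for agent actions: check scope, escalate high-impact
# actions to a human, and log the stated reasoning, not just the act.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guard")

HIGH_IMPACT = {"approve_payment", "delete_record"}  # assumed examples

def execute(agent_scopes: set[str], action: str, reasoning: str,
            human_approved: bool = False) -> str:
    log.info("action=%s reasoning=%s", action, reasoning)
    if action not in agent_scopes:
        return "denied: outside agent scope"        # least privilege
    if action in HIGH_IMPACT and not human_approved:
        return "pending: human approval required"   # human in the loop
    return "executed"

scopes = {"send_summary", "approve_payment"}
print(execute(scopes, "approve_payment", "invoice matches PO"))        # pending: human approval required
print(execute(scopes, "approve_payment", "invoice matches PO", True))  # executed
print(execute(scopes, "delete_record", "cleanup"))                     # denied: outside agent scope
```

Real systems would persist these logs and tie approvals to identity, but the shape is the point: the agent's authority is explicit, and every decision leaves a reasoning trail.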

One lesson for building teams with AI today

The most important lesson is simple: do not outsource trust to machines.

AI agents can act, decide, and communicate at scale—but accountability remains human. Teams that build secure, trusted AI systems are not those with the most advanced models, but those that design for scepticism, transparency, and responsibility from the start.

As AI agents continue to take action on our behalf, cybersecurity will be defined less by firewalls and more by how well we understand and govern the relationship between humans and machines.

The SME finance reset: 3 steps to fix what’s breaking your growth

Many SMEs in Southeast Asia only realise their finance setup is no longer working when growth accelerates. Processes that once felt manageable begin to strain as transaction volumes increase, additional stakeholders enter, and reporting requirements tighten. 

In reality, clearer workflows and stronger process visibility enable problems to be identified much earlier, before delays, errors, and cash-flow blind spots become structural.

The issue is not poor execution. More often, finance systems in SMEs are designed for stability rather than growth.

SMEs power Southeast Asia, but finance is still treated as an afterthought

Small and medium enterprises dominate Southeast Asia’s business landscape. Across the region, SMEs account for roughly 97 per cent of businesses and contribute over 40 per cent of GDP.

Despite their scale, many SMEs still rely on manual, disconnected processes for core finance tasks. Research by the Economic Research Institute for ASEAN and East Asia (ERIA) highlights persistent barriers to digital adoption in developing Asian markets. These include limited business knowledge, gaps in ICT skills, and a lack of localised support.

We see the same pattern in mature markets like Australia. Even with great tech available, an October 2025 OFX report found that 80 per cent of Australian SMEs still rely on manual processes to reconcile expenses. In fact, nearly 38 per cent of business owners report that simple manual data-entry errors are their biggest daily headache. 

It’s a classic case of ‘if it isn’t broken, don’t fix it’ until the manual workload finally becomes too heavy to manage.

Finance issues follow predictable patterns as businesses scale

As SMEs grow, financial complexity increases faster than many teams expect. Invoice volumes rise. Transactions multiply. More people touch the process. Customers and suppliers operate across borders. Regulatory and reporting requirements tighten.

When finance processes are not redesigned for higher volumes, familiar issues begin to surface:

  • Invoices are sent late or tracked inconsistently
  • Approvals are concentrated with one individual
  • Reconciliation is rushed at month-end
  • Cash flow visibility becomes limited

These challenges are not surprising. They are the natural outcome of processes that were never redesigned as volumes increased. 

Consulting and software surveys repeatedly point to the same outcome. Weak invoicing and reconciliation processes that depend on spreadsheets or email lead to delayed payments, write-offs, and significant time spent chasing basic financial information.

Also Read: Security, trust, and the future of finance in an AI-driven world

Automation is about reducing friction, not adding tools

Automation is often misunderstood as a large-scale system change or a heavy transformation, when in reality it is primarily about reducing operational risk and manual friction.

For most SMEs, progress starts much smaller.

OECD research on SME digitalisation shows that smaller firms adopt digital tools more slowly than larger organisations, even though the efficiency gains are often proportionally greater. The challenge is rarely technology alone. It is deciding where to start and what to simplify.

In practice, effective automation focuses on removing repetitive friction:

  • Standardised invoice workflows
  • Automated reminders instead of time-consuming follow-ups
  • Approval steps that do not depend on one person
  • Fewer instances of entering the same data multiple times

The priority is reliability first. Speed and sophistication follow naturally once the basics are stable.
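The kind of friction this removes can be made concrete with a minimal sketch (a hypothetical data model, not any particular accounting product): scan the invoice book and flag unpaid invoices that have slipped past a grace period, which is exactly the check an automated reminder would run on a schedule instead of a person chasing emails.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Invoice:
    number: str
    due: date
    paid: bool = False

def invoices_needing_reminder(invoices, today, grace_days=3):
    """Return unpaid invoices that are past due by more than grace_days."""
    cutoff = today - timedelta(days=grace_days)
    return [inv for inv in invoices if not inv.paid and inv.due < cutoff]

# Illustrative data only.
today = date(2025, 11, 1)
book = [
    Invoice("INV-001", date(2025, 10, 1)),                # long overdue
    Invoice("INV-002", date(2025, 10, 30)),               # within grace
    Invoice("INV-003", date(2025, 10, 1), paid=True),     # settled
]
overdue = invoices_needing_reminder(book, today)
print([inv.number for inv in overdue])  # → ['INV-001']
```

A scheduled job running this check and sending the reminders is the "automated reminders instead of time-consuming follow-ups" item above in its simplest possible form.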

A practical three-step reset for SME finance

For SMEs looking to improve finance operations, a phased approach is often the most effective.

  • Step 1: Make the workflows visible

Document how invoicing, payments, expenses, and compliance actually work today. Simple process mapping often reveals duplication, unclear ownership, and hidden bottlenecks.

  • Step 2: Fix the biggest point of friction

Focus on one or two problem areas, such as unpaid invoices, approval delays, or reconciliation backlogs. Small, targeted improvements here often deliver immediate operational and cash flow benefits.

  • Step 3: Connect workflows over time

Gradually link invoicing, payments, reconciliation, and reporting so information flows with fewer handoffs. This is where finance shifts from record-keeping to decision support. Research by McKinsey has shown that connected finance workflows can significantly shorten close cycles, in some cases from weeks to days.

Also Read: Why perfect carbon audits could cripple climate finance — and what to fix instead

What consistently works across SMEs

Across SMEs in Singapore and the wider region, several patterns are consistent:

  • Small, focused improvements outperform large system overhauls
  • Early clean-up reduces operational and compliance risk
  • Clear records make audits, fundraising, and reporting easier

When finance operations are stable and predictable, less time is spent fixing errors and more time is available for planning and execution.

The payoff of clear finance processes

Finance rarely becomes a problem overnight. It becomes one gradually, as systems fail to keep pace with growth.

Effective finance operations do not need to be complex. They need to provide dependable visibility. Knowing who owes the business money, what needs to be paid, and where cash stands makes day-to-day operations calmer and month-end faster.

As Southeast Asia’s SMEs continue to expand across borders and operate in increasingly regulated environments, financial maturity will become a competitive advantage rather than a compliance requirement. Clear, connected finance processes provide the operational foundation for sustainable growth and long-term competitiveness.


The post The SME finance reset: 3 steps to fix what’s breaking your growth appeared first on e27.


Building an inclusive AI economy starts with access to deployment tools

Artificial intelligence is rapidly becoming the operating layer of the digital economy. Businesses are using AI to automate customer support, improve marketing outreach, and analyse large volumes of data in real time. According to McKinsey, 88 per cent of organisations now use AI in at least one business function, a significant increase from just a few years ago.

Customer engagement is one of the areas changing the fastest. Gartner predicts that conversational AI agents could automate up to 70 per cent of customer interactions by 2027, fundamentally reshaping how companies interact with customers.

Across Asia, this shift is already underway. In Singapore, companies are increasingly using AI across marketing analytics, sales automation, and customer engagement as they look to manage growing volumes of digital interactions. However, as AI becomes embedded in everyday business operations, an important question is emerging. Who actually gets to participate in this AI-powered economy?

The answer will depend not only on access to data or talent, but also on something far less visible. It will depend on the infrastructure that allows businesses to deploy AI systems reliably at scale.

If access to that infrastructure remains limited to large technology companies, the AI revolution could reinforce existing inequalities in the digital economy. But if the tools to deploy AI become easier to access, a much broader range of organisations will be able to build and benefit from AI-powered services.

The infrastructure gap in AI adoption

Much of the global conversation around AI focuses on breakthroughs in large language models. However, turning those models into real-world applications requires far more than simply connecting to an API.

Real-time AI systems often require multiple technologies working together simultaneously. These include speech recognition, natural language processing, text-to-speech synthesis, and networking infrastructure capable of delivering responses instantly.

For many organisations, especially smaller companies and startups, integrating these systems presents a major technical challenge. A Gartner survey found that 85 per cent of customer service leaders plan to explore or pilot conversational AI, yet many organisations still struggle to move from experimentation to full deployment.

One reason is that real-time interactions place strict demands on infrastructure. Even small delays can make AI conversations feel unnatural. Systems must process speech, interpret intent, generate responses, and deliver audio output within milliseconds.
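Those millisecond constraints amount to a latency budget shared across the pipeline stages. The sketch below is illustrative only; the stage names and budget figures are assumptions for the sake of the example, not measurements from any real platform.

```python
# Hypothetical per-stage latency budget (milliseconds) for a voice AI pipeline.
BUDGET_MS = {
    "speech_recognition": 150,
    "intent_and_llm_response": 400,
    "text_to_speech": 120,
    "network_delivery": 80,
}

def total_latency(budget):
    """End-to-end latency if every stage spends its full budget."""
    return sum(budget.values())

def within_target(budget, target_ms=800):
    """Past roughly this threshold, a spoken exchange starts to feel unnatural."""
    return total_latency(budget) <= target_ms

print(total_latency(BUDGET_MS), within_target(BUDGET_MS))  # → 750 True
```

The point of the exercise: any single stage overrunning its slice (a slow model call, a congested network hop) blows the whole conversation's budget, which is why orchestration and networking matter as much as the model itself.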

Technology platforms are beginning to address this complexity by combining these components into integrated systems. For example, communications technology provider Agora recently introduced a conversational AI agent solution that integrates speech recognition, large language models, and text-to-speech technologies within a single orchestration layer designed for real-time conversations. 

The platform also relies on a globally distributed real-time network designed to maintain low latency and stable communication across different network conditions. Infrastructure like this aims to remove some of the production challenges that have historically limited voice AI deployment.

Other companies, such as Google Cloud and Amazon Web Services, provide APIs and cloud services that allow developers to embed messaging, voice communication, and AI capabilities into applications without building the entire infrastructure stack themselves. By simplifying these technical requirements, such platforms may help more organisations experiment with and deploy conversational AI.

Also Read: Your biggest competitor might be the AI answer itself

Voice AI and the next interface of digital services

Voice-based AI agents are emerging as one of the most transformative applications of artificial intelligence.

Customer service, sales outreach, and digital support channels are increasingly powered by conversational interfaces that allow users to interact naturally with businesses. Instead of navigating complex menus or typing long messages, users can speak directly with AI systems capable of understanding requests and responding in real time.

This shift is already visible across multiple industries.

Banks are already deploying AI-driven systems to manage financial services and transactions. In Singapore, DBS Bank recently partnered with Visa to pilot Visa Intelligent Commerce, a platform designed to enable secure, agent-initiated payments where AI agents can make purchases or transactions on behalf of consumers with consent and authentication safeguards.

Singapore Airlines recently partnered with Salesforce to introduce AI agents that assist customer service teams by summarising customer interactions and recommending responses in real time, helping staff respond more efficiently during booking inquiries or travel disruptions. E-commerce companies are also exploring conversational and voice-based interfaces to support product discovery, customer support, and post-purchase assistance.

Voice interfaces also offer important accessibility benefits. Speaking is often more intuitive than navigating complex applications, particularly for users who are less comfortable with digital interfaces. However, building voice-based AI systems that feel natural requires extremely reliable infrastructure. Conversations must occur instantly without noticeable delays. Systems must maintain accuracy even in noisy environments or unstable network conditions.

These technical requirements have historically limited large-scale deployment. Platforms that provide real-time communication networks and integrated AI orchestration are attempting to change that by making voice AI easier to deploy across industries.

The future of work in an AI-driven economy

The growth of conversational AI also raises important questions about the future of work.

AI systems are increasingly capable of handling routine customer interactions such as appointment reminders, billing inquiries, and product information requests. Automating these tasks can help organisations manage growing service volumes while improving response times.

However, automation does not necessarily mean replacing human workers.

Also Read: What is zero-click AI visibility? Impact on digital strategy & conversions

Research has shown that AI assistance can significantly improve productivity for employees, particularly when AI helps workers resolve issues more quickly or provides real-time guidance. In customer service environments, AI agents can handle repetitive inquiries while human agents focus on complex issues that require empathy, judgment, or negotiation.

In sales environments, AI tools can assist with lead qualification and outreach while sales teams focus on building relationships and closing deals. Ensuring that workers benefit from this transition through training and new opportunities will be essential to building a more inclusive digital economy.

Equity requires accessible infrastructure

As artificial intelligence becomes embedded in nearly every digital experience, conversations about equity in the digital economy must extend beyond funding and talent.

Infrastructure plays a critical role.

Who has access to the platforms that make AI usable in real-world applications?

Who can deploy AI-powered services quickly and affordably?

And who is excluded when the barriers to adoption remain too high?

The next phase of digital innovation will not be defined only by breakthroughs in AI models. It will also be shaped by the infrastructure that allows businesses of all sizes to turn those models into real products and services.

If these tools remain accessible, the AI era could unlock opportunities across industries and markets. But if access to AI deployment infrastructure becomes concentrated among a few dominant players, the gap between digital leaders and everyone else may continue to widen.

Building equity into the digital economy ultimately means ensuring that the power of AI is not reserved for a select few but is available to the many organisations and innovators shaping the future of technology.


The post Building an inclusive AI economy starts with access to deployment tools appeared first on e27.


Echelon Philippines 2025 – Holistic development of the venture ecosystem in the Philippines: From early fragmentation to cohesive growth

At Echelon Philippines 2025, a panel moderated by Twwo Jaruthassanakul of Seedstars explored the evolution of the country’s venture ecosystem.

Speakers Joan Yao of Kickstart Ventures, Joseph de Leon of Manila Angel Investors Network, and Paulo Campos III of Kaya Founders reflected on how the once-fragmented startup landscape has matured into a more cohesive and dynamic ecosystem. Growth has been fuelled by diverse talent, including the “sea turtle” phenomenon—Filipinos returning home after studying or building companies abroad.

Despite the regional funding winter, the Philippines has maintained steady investment activity, supported by a vibrant and increasingly organised community of angel investors helping nurture early-stage startups.

The post Echelon Philippines 2025 – Holistic development of the venture ecosystem in the Philippines: From early fragmentation to cohesive growth appeared first on e27.


Ant International: FinAI paving the last mile for agentic commerce

Ant International’s Jiang-Ming Yang explains how FinAI enables secure agentic commerce, helping businesses manage global payments, AI-driven transactions, and cross-border growth.

FinAI has become the essential backbone to enable secure agentic commerce at scale as AI drives change across every part of the economy, said Jiang-Ming Yang, Chief Innovation Officer of Ant International.

Global shifts in commerce and payments

  • The speed of consumer AI adoption has outpaced almost any other technology in history, with analysts forecasting AI-facilitated spend to reach nearly US$8 trillion by 2030, close to a quarter of all online sales.
  • Consumers are embracing new payment methods. Digital wallets and other alternative payment methods (APMs) as well as open banking continue to grow in popularity, especially in emerging markets. Juniper Research expects the number of digital wallet users to grow to over 6 billion by 2030, covering over three quarters of the global population.
  • Emerging markets are driving global growth, but merchants there face increasing foreign exchange volatility and high barriers to doing business on international e-commerce platforms.

Also read: Why WorldFirst’s latest move could change how digital platforms scale worldwide

FinAI paving the last mile for next-gen commerce

FinAI will be key in helping merchants navigate global payments systems and adapt to AI-driven commerce, Yang said. Payment firms will become one-stop FinAIaaS partners enabling businesses to engage customers more efficiently, immersively and securely.

According to Yang, Ant International provides five types of critical FinAI capabilities:

  • One seamless checkout for cross-channel payments (card, digital wallets, and open banking),
  • One agent partner to resolve global payment complexity,
  • Customisable solutions for agentic payments and commerce,
  • Embedded payments for additional value-added services, and
  • AI-powered payment security foundation.

Agentic fintech for businesses of all sizes

With AI, technology and operating know-how can be distilled into a single agent, enabling businesses to conduct end-to-end operations from onboarding to optimising payment success rates through one partner. Solutions such as Antom Copilot, which can reduce merchant payment integration time by up to 90 per cent and improve dispute-handling efficiency by 46 per cent, vastly expand access to growth opportunities.

“In the past, only large enterprises had the luxury of hiring large teams to handle the complexities of dealing with global expansion and different payment methods,” said Yang. “Now, AI can change the way we operate by giving businesses access to a single agent partner that is available 24/7.”

Ant International is already working with major players to support agentic commerce growth, collaborating with Google on its Agent Payments Protocol (AP2) and Universal Commerce Protocol (UCP) standards, which guide how agents can operate across the entire shopping journey.

Also read: Eyes on the prize as biometrics reshape everyday payments

Trust as the foundation for growth

Alongside growth potential, AI also brings new challenges to merchants and consumers. Deepfakes, for example, have become a persistent problem. Ant International has developed an advanced anti-deepfake solution, which demonstrates detection rates of over 99 per cent. Yang also highlighted the company’s SHIELD 3-in-1 Transformer model, which can identify high-risk transactions with over 95 per cent precision, as key to providing a single trust layer for AI-driven payment security.

“AI-powered threats are no longer just theoretical, they are a reality that we face today. As technologies evolve, one thing does not change – trust will always be the foundation of payments, and will continue to be at the core of our FinAI development journey,” Yang added. He made the remarks in a case study address at The Economist’s Technology for Change conference in March 2026.


This is a sponsored article produced by the e27 team.

Featured Image Credit: Ant International

The post Ant International: FinAI paving the last mile for agentic commerce appeared first on e27.


Mozark raises US$40M to test how apps really behave in the wild

Singapore-based Mozark, which helps organisations check how their digital services actually perform for real users in the real world, has raised US$40 million in Series B funding.

The round was led by IFC (World Bank Group) and RMB Capitalworks, with Kalaari Capital also participating.

Mozark sells what is essentially a reality check for digital products. Its platform runs scripted user journeys on real devices, across real networks and locations, then turns the resulting telemetry into a diagnosis of where performance breaks — from the app layer down to network infrastructure.
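A scripted user journey of the sort described here can be sketched as a harness that times each step, so a slow stage can later be attributed to the app, the CDN, or the network. The steps and timings below are hypothetical stand-ins (simple sleeps), not Mozark's actual tooling.

```python
import time

def run_journey(steps):
    """Execute a scripted user journey, recording each step's duration in ms
    so that slow stages can be identified and attributed afterwards."""
    timings = {}
    for name, action in steps:
        start = time.perf_counter()
        action()
        timings[name] = (time.perf_counter() - start) * 1000.0
    return timings

# Hypothetical steps; a real harness would drive an actual device and app.
steps = [
    ("open_app", lambda: time.sleep(0.01)),
    ("login", lambda: time.sleep(0.05)),
    ("load_dashboard", lambda: time.sleep(0.02)),
]
timings = run_journey(steps)
slowest = max(timings, key=timings.get)
print(slowest)  # → login
```

Running the same journey from many devices, networks, and locations is what turns this per-step telemetry into the kind of diagnosis the article describes.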

Also Read: Transformation tenet: The digital customer experience is key to “stickiness”

The company says it now works with more than 50 enterprise and government customers across 20 countries and has executed more than 25 million tests on several thousand live devices.

Why this matters for Southeast Asia

For Southeast Asia, the significance of the round goes beyond another sizable cheque in Singapore. It reflects a broader shift in the type of digital infrastructure attracting investment.

The region’s digital economy has grown rapidly, but reliability remains highly variable outside premium urban corridors. When apps slow down or fail, the consequences extend beyond user frustration. They can mean missed payments, failed logins, dropped telehealth calls, or unreliable access to government services.

Mozark’s proposition is straightforward: measure digital performance as experienced by real users, not as reported by dashboards inside a cloud region. In markets where regulators, telcos, and critical service providers need evidence of service quality across diverse geographies, that distinction matters.

It also helps explain why IFC’s involvement is notable. For development-focused investors, tools that measure digital reliability are increasingly viewed as part of the economic plumbing of emerging markets, rather than simply another DevOps layer.

Where the new capital will go

Mozark plans to use the fresh capital to accelerate expansion beyond Southeast Asia and deepen its technical capabilities.

The company says the funding will support:

  • Expansion into priority markets, including the United States and the Global South
  • Strategic acquisitions
  • Deeper testing and measurement across what it calls the “AI-native stack”, spanning applications, networks, and AI infrastructure
  • Development of agent-to-agent communication testing, designed for systems where AI agents — not just humans — exchange requests and execute tasks

According to founders Kartik Raja and Fabien Renaudineau, the need for real-world testing is growing as digital services become more complex.

“AI is accelerating digital services everywhere, but experience quality remains disparate and unreliable,” they said in a joint statement.

Mozark’s Chief Product Officer Chandra Ramamoorthy points to the underlying constraint: traditional testing approaches still depend heavily on controlled environments. “Testing remains constrained by physical infrastructure limitations,” he said, positioning Mozark’s real-device approach as a way to validate performance at scale under real-world conditions, rather than relying solely on lab simulations.

A market shifting from monitoring to proof

Digital experience monitoring has long relied on dashboards and synthetic checks running from data centres. But in the Asia Pacific, the core challenge is increasing variability.

Users frequently switch between Wi-Fi and mobile networks, rely on mid-range Android devices, and access services that traverse a complex chain of CDNs, telco routing, cloud regions, and third-party APIs.

This complexity is pushing enterprises toward tools that can answer more practical questions:

  • How does this app behave on a specific handset model in a second-tier city?
  • Is latency caused by the app itself, the CDN, the ISP route, or local congestion?
  • Can regulators or enterprises independently verify performance claims?

In other words, the market is shifting from monitoring systems to proving the user experience.

A growing but still fragmented market

Public analyst breakdowns typically group “digital experience monitoring” within broader application performance monitoring (APM) and observability markets.

Also Read: How Southeast Asian brands are reimagining the future of digital experiences

By those measures, Southeast Asia remains a relatively small slice of Asia Pacific spending, though growth is accelerating as banks, telcos, superapps, and governments digitise more workflows.

In practical terms, that places the regional opportunity in the hundreds of millions of US dollars annually, with further upside as AI-driven services increase the cost of outages or degraded experiences.

Mozark is positioning itself in the gap between traditional application monitoring, which often assumes stable infrastructure, and network measurement tools, which rarely capture full application journeys.

This positioning may prove particularly relevant in markets where sovereignty-ready deployments and independent verification are becoming increasingly important.

A competitive global arena

Mozark is entering a competitive field populated by well-funded incumbents and specialised measurement platforms.

Key players include:

  • Dynatrace, Datadog, New Relic, and AppDynamics (broad observability and APM platforms)
  • Catchpoint and ThousandEyes (Cisco) (internet and network experience monitoring)
  • Akamai and Cloudflare (performance infrastructure with measurement capabilities)
  • Network and mobile performance specialists such as Ookla and Opensignal

Mozark’s differentiation lies in combining real-device, real-network testing across multiple geographies with an emphasis on independent measurement and deployments designed to meet regulatory and data sovereignty requirements.

A rare Southeast Asian contender

Within Southeast Asia, many vendors provide QA testing or performance monitoring. But few homegrown platforms focus on large-scale, real-world device telemetry across multiple countries, serving both enterprises and regulators.

As a result, Mozark often finds itself competing with global platforms or with in-house monitoring solutions that struggle as systems become more AI-driven and interconnected.

The new funding gives Mozark the runway to prove that its model can scale globally.

The company’s broader bet is that digital experience will soon need to be measured as rigorously as uptime, not simply marketed.

The post Mozark raises US$40M to test how apps really behave in the wild appeared first on e27.


Bitcoin and Ethereum rally while S&P 500 plummets: Is crypto finally decoupling from traditional markets?

The cryptocurrency market advanced 2.15 per cent to reach a total capitalisation of US$2.44T on March 13, 2026. This gain stands out because it occurred while traditional risk assets faced severe pressure. Equities and bonds sold off sharply as Brent crude oil surged above US$100 per barrel for the first time since 2022. Escalating Middle East tensions and a critical blockage in the Strait of Hormuz triggered the move.

The crypto market’s weak correlation with the S&P 500 at -14 per cent and with Gold at -34 per cent signals a crypto-specific catalyst rather than broad risk-on sentiment. This divergence suggests digital assets are beginning to trade on their own fundamental narratives. Such independence represents a maturation I have long argued is essential for the asset class to evolve beyond a speculative adjunct to traditional finance.
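The correlation figures cited here are standard Pearson coefficients over paired return series. A self-contained sketch of the computation, using made-up daily returns purely for illustration (not real market data):

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series:
    covariance divided by the product of standard deviations."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical daily returns for illustration only.
crypto_returns = [0.021, -0.004, 0.013, 0.008, -0.002]
sp500_returns = [-0.005, 0.006, -0.011, 0.004, 0.003]
print(round(pearson(crypto_returns, sp500_returns), 2))
```

A value near zero, such as the -14 per cent the article cites, means the two assets have been moving largely independently; values near +1 or -1 indicate tight co-movement.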

The primary engine behind this rally is BlackRock’s launch of its iShares Staked Ethereum Trust, ticker ETHB, which debuted on Nasdaq on March 12. The product generated US$15.5M in first-day volume, a solid start for a novel instrument. This ETF allows investors to gain exposure to Ethereum’s price while simultaneously earning staking rewards. The design treats ETH as a productive, yield-bearing asset. This marks a profound shift.

For years, institutional adoption focused on Bitcoin as digital gold, a store of value. BlackRock’s move validates Ethereum’s utility as a foundational technology capable of generating cash-flow-like returns. By locking up ETH supply through staking, the product mechanically reduces sell-side pressure. This creates a favourable supply-demand dynamic. The critical metric to watch now is weekly ETF flow data. Sustained inflows would confirm that institutions are not just testing the water but are committing capital to this new yield-bearing crypto thesis.

Supporting this institutional momentum is a wave of regulatory optimism. Social media channels buzzed with reports that President Trump had confirmed a zero per cent tax on crypto transactions. Additional chatter highlighted the US Senate advancing measures to block a Central Bank Digital Currency until 2030. While these developments require official verification, the market is clearly pricing in a more accommodating policy environment. This narrative has fuelled a healthy rotation of capital into altcoins. The Layer 1 sector advanced 1.58 per cent.

Artificial intelligence tokens like Render surged over 11 per cent. Bitcoin dominance held steady at 58.78 per cent. This indicates that new money is flowing into the broader ecosystem rather than just fleeing to the largest asset. Such breadth is a positive sign for market health. It suggests investors are gaining conviction in specific technological narratives like decentralised compute and scalable infrastructure.

Also Read: Why crypto surged while stocks fell: The regulatory breakthrough changing everything

From a technical perspective, the market cap is now testing a pivotal level at US$2.44T. Immediate resistance sits at the recent swing high of US$2.46T. A clean break above this level could open a path toward the US$2.52T extension. Caution is warranted because the seven-day Relative Strength Index reads 74.39. This indicates overbought conditions in the short term.
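The Relative Strength Index referenced above can be computed with the standard Wilder formula; a reading above 70 is conventionally treated as overbought. The close series below is illustrative only, not actual market-cap data.

```python
def rsi(closes, period=7):
    """Wilder-smoothed Relative Strength Index over `period` intervals."""
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    # Seed with simple averages, then apply Wilder smoothing.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

# Hypothetical close series (US$ trillions of total market cap).
closes = [2.30, 2.33, 2.31, 2.36, 2.38, 2.37, 2.41, 2.44]
print(round(rsi(closes), 1))  # → 85.0
```

With mostly up-days in the window, the reading lands well above 70, the same overbought signal the article's seven-day RSI of 74.39 is flagging.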

The rally may need to consolidate before its next leg higher. The key support level to monitor is US$2.33T. A break below this floor would signal a loss of momentum and could trigger a deeper pullback. The next major catalyst will be the upcoming US ETF flow reports. Positive data could provide the fuel needed to overcome resistance. Disappointing flows might exacerbate a technical correction.

This crypto-specific rally gains additional significance when viewed against the backdrop of traditional market turmoil. On March 12, US indices posted broad declines. The Dow Jones Industrial Average fell 739.42 points, or 1.56 per cent, to close at 46,677.85. The S&P 500 dropped 103.22 points, or 1.52 per cent, to 6,672.58. This marked its lowest close since November. The Nasdaq Composite slipped 404.15 points, or 1.78 per cent, to 22,311.98 as technology stocks grappled with rising yields. The VIX volatility index settled at 24.23, reflecting elevated fear. The trigger for this selloff was the energy crisis. Brent crude surged over nine per cent to settle at US$100.20 per barrel.

The International Energy Agency warned of the largest oil supply disruption in history. This shock has forced traders to scrap expectations for Federal Reserve rate cuts in 2026. Soaring energy costs threaten to reignite inflation. Consequently, US Treasury yields are climbing. The 2-year yield jumped 11 basis points. The 10-year yield hit 4.27 per cent. Stress is also emerging in the US$1.8T private credit market. Funds like Morgan Stanley and Cliffwater LLC have capped withdrawals following a surge in redemption requests.

In this environment, crypto’s decoupling is not just a market curiosity. It represents a potential shift in how digital assets function within a diversified portfolio. My view has consistently been that crypto’s long-term value proposition hinges on its ability to offer uncorrelated returns driven by its own adoption cycles and technological progress. The current action supports that thesis.

The rally is fuelled by a structural product innovation from the world’s largest asset manager and a favourable regulatory narrative. It is not driven by a surge in liquidity from traditional markets. This is a more sustainable foundation for growth. Sustainability remains the key question. Can the crypto market maintain its upward trajectory if ETF inflows decelerate this week or if the macro backdrop worsens? The overbought RSI suggests a pause is likely. The underlying drivers remain intact.


The path forward hinges on a few clear factors. First, institutional demand for the new staked Ethereum ETF must prove durable. Second, the regulatory narrative needs to translate into concrete policy actions to maintain confidence. Third, the market must successfully digest its overbought condition without breaking below the US$2.33T support. A failure on any of these fronts could lead to crypto re-correlating with traditional risk assets, which are currently under severe strain from inflation fears and geopolitical instability. For now, the momentum is bullish, and the drivers are specific to the crypto ecosystem. That is a sign of maturation.

The market is beginning to trade on its own merits. This development aligns with the vision of a decentralised financial system operating in parallel with, and sometimes independently of, the legacy system. The coming days, with their focus on ETF flows and key technical levels, will provide crucial evidence on whether this independence can be sustained amid a global macro storm. Investors should watch the US$2.46T resistance and US$2.33T support as decisive boundaries.

A break above US$2.46T could accelerate gains toward US$2.52T. A drop below US$2.33T would signal a loss of momentum and invite a deeper correction. The US$15.5M debut volume for ETHB offers an initial benchmark, but sustained weekly flows will determine if institutional appetite remains strong.

With Bitcoin dominance at 58.78 per cent, the market retains room for altcoin expansion if the regulatory tailwinds persist. The 7-day RSI at 74.39 warns of short-term exhaustion, so patience may reward those waiting for a healthier entry point. In a world where Brent crude trades above US$100 per barrel and the 10-year yield touches 4.27 per cent, crypto’s ability to post gains on its own terms signals a new phase of market evolution. This phase demands careful monitoring of ETF data, technical levels, and policy developments. The US$2.44T market cap represents both opportunity and risk. Navigating this landscape requires discipline, clarity, and a focus on the structural forces shaping the next chapter of digital finance.
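For readers who want to track the overbought signal themselves, the RSI cited above can be computed from a daily price series. The sketch below uses the simple (unsmoothed) RSI formula rather than Wilder's exponential smoothing, and the market-cap series is hypothetical, chosen only to illustrate how a steady climb pushes the 7-day reading above the conventional 70 overbought threshold.

```python
def rsi(prices, period=7):
    """Simple RSI: ratio of average gain to average loss over the window."""
    gains, losses = [], []
    for prev, curr in zip(prices, prices[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[-period:]) / period
    avg_loss = sum(losses[-period:]) / period
    if avg_loss == 0:
        return 100.0  # no down days in the window
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

# Hypothetical daily closes for total crypto market cap (US$ trillions)
caps = [2.30, 2.33, 2.35, 2.34, 2.38, 2.40, 2.42, 2.44]
print(round(rsi(caps), 2))  # well above the 70 overbought threshold
```

Note that charting platforms typically apply Wilder's smoothing, so their RSI values will differ somewhat from this simple variant; the overbought interpretation above 70 is the same in both cases.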

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.

Join us on Instagram, Facebook, X, and LinkedIn to stay connected.

The post Bitcoin and Ethereum rally while S&P 500 plummets: Is crypto finally decoupling from traditional markets? appeared first on e27.