
The hidden cost of AI coding: Why proof will matter more than prompts

AI coding tools turned software output into a speed story. A developer can sketch a product in the morning and push a working build before dinner. That is why vibe coding spread so fast, even as security researchers warned that AI-generated code can widen software supply chain risk.

The part most people missed sits behind the prompt box. In many AI coding stacks, code, prompts, and usage data can pass through outside platforms, cloud infrastructure, or model-provider systems. 

For a startup hacking on a landing page, that may feel tolerable. For a bank, fintech, or fund, it can open a path to IP loss, audit trouble, and valuation damage.

An alarm bell from my own workflow

The concern started from a personal place. I had been using AI coding tools on quant trading systems, then realised the privacy settings behind those tools deserved a much closer look. This is my life’s work. How am I supposed to feel about this?

One example of that concern reflected in policy is Cursor’s data-use page. It states that if Privacy Mode is turned off, Cursor may use and store codebase data, prompts, editor actions, code snippets, and other code data to improve features and train models. Requests still pass through its backend, even when a user brings an API key of their own.

The rules also change depending on which product is in the chain. OpenAI states it doesn’t train on business data by default, and Anthropic shares the same for its commercial products. Consumer products and third-party access follow separate terms, which leaves enterprises sorting through a patchwork of settings, vendors, and responsibilities.

Also Read: Can you build an app without coding? My experiment might surprise you

Why this hits finance harder

A code leak is not just a developer problem in regulated sectors. A financial codebase can hold client identifiers, internal controls, pricing logic, fraud rules, risk models, and trading strategies. Put differently, source code carries business logic, internal workflows, architecture decisions, and years of engineering work. Once it leaves a company’s control, the damage can spill into customer trust, due diligence, compliance, and enterprise value.

90 per cent of security professionals say insider attacks are as hard as or harder than external ones to detect; 72 per cent of organisations still cannot see how users interact with sensitive data across endpoints, cloud apps, and GenAI platforms.

And that pressure is meeting a tougher legal climate. 2025 marked the move from AI hype to AI accountability, with regulators in the U.S. and EU shifting toward enforcement and compliance deadlines. In Europe, the Digital Operational Resilience Act makes clear that financial entities remain fully responsible for their obligations, including when ICT services are outsourced.

Visibility is also getting worse as AI systems touch more of the workflow. Only 21 per cent of organisations maintain a fully up-to-date inventory of agents, tools, and connections, leaving 79 per cent operating with blind spots. Nearly 40 per cent of enterprise AI interactions now involve sensitive data, including copied text, pasted content, and file uploads.

What’s the pitch to non-technical executives?

Let’s frame the risk in business terms. Using AI means sending data to whoever provides the model or the platform, and potentially also to whoever provides the infrastructure.

The big question for executives, in my view, is whether they are comfortable with that chain seeing, storing, or learning from their most valuable data.

Also Read: From chatbots to vibe-coding: 3 AI experiments that changed my investment strategy

The answer changes fast here. Financial and regulated firms can’t afford the ‘move fast and break things’ approach that many AI tools implicitly encourage. More often now, regulators, buyers, and internal security teams want a clear record of where data went, who touched it, and what evidence exists afterwards.

The next premium in AI: Controlled execution?

The market has already rewarded speed. The next premium may go to platforms that keep that speed while standing up to security review and giving compliance teams evidence they can stand behind. That is a finance story as much as a tech one, because budgets, contracts, due diligence, and enterprise value tend to follow tools that reduce uncertainty instead of adding another black box.

AI can clearly write code. But where does that code travel? Who can inspect the path? What proof is left behind when the work is done? Those are the sharper questions for boards, CFOs, CISOs, and investors.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.


The post The hidden cost of AI coding: Why proof will matter more than prompts appeared first on e27.


Cybersecurity strategies for startups on a budget

Digital evolution worldwide has been rapid over the past few decades. Startups are increasingly transitioning from local to regional and even global market presence, underscoring the opportunities that digitisation at scale has brought. This development has made cybersecurity a key pillar of effective business governance in the modern age. 

Today, having a robust cybersecurity ecosystem ensures that startups preserve stakeholder trust. Thankfully, small businesses no longer require a large capital investment to build a defensible and modern security posture. With a strategic approach, high impact and low cost can exist simultaneously.

Shifting from a reactive mindset to a proactive one

The cybersecurity industry has largely shifted from focusing solely on prevention toward building resilient and proactive models. Emphasis on detection and recovery has become an important measure of a business’s longevity. Adopting this philosophy is critical for emerging businesses, which are often viewed as particularly vulnerable. 

For startups, effectively protecting sensitive customer information can be the difference between long-term growth and reputational damage. Entrepreneurs who focus on security early on find it easier to navigate regulatory requirements and secure partnerships with large organisations. Fortunately, building resilient systems is more about continuous education and operational improvement than it is about heavy capital expenditure. 

Strengthening identity and access control

Identity is the primary focus area for building modern cybersecurity systems. With more businesses migrating to cloud-based ecosystems, user account management is now the most important line of defence. Implementing Multi-Factor Authentication (MFA) is the most effective and low-cost approach available. By adopting a second form of verification, organisations can prevent approximately 99 per cent of account takeover attacks. 
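To make the second factor concrete, here is a minimal sketch of the TOTP arithmetic (RFC 4226/6238) behind most authenticator apps. It is an illustration of the mechanism only, not a production implementation; in practice, startups should rely on a vetted library or their identity provider’s built-in MFA.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian 8-byte counter (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low 4 bits of the last byte select a 4-byte window
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp_now(secret: bytes, step: int = 30) -> str:
    # TOTP (RFC 6238): HOTP keyed to the current 30-second time window
    return hotp(secret, int(time.time()) // step)
```

Because the code depends only on a shared secret and the clock, the server and the phone app compute the same six digits independently, which is what makes the scheme cheap to deploy.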

Centralised password management is also essential. Making employees remember complex passwords invites reuse across personal and professional platforms. Tools such as Bitwarden or Keeper help startups generate unique, complex passwords, so a breach on a third-party platform does not lead to deeper internal entry. Such subscriptions are inexpensive for the depth of protection they provide. 

Also Read: AI vs AI: Inside Southeast Asia’s new cybersecurity war

Managing the hybrid work perimeter

Flexible work arrangements, such as hybrid or fully remote models, are becoming a defining feature across industries worldwide. Many startups leverage remote talent to stay competitive, but this decentralised model introduces risks as employees access data from unsecured home networks. In fact, 25 per cent of employees working from home are unaware of their device’s security protocols. Startups must rethink data protection outside the office.

Organisations should implement Virtual Private Networks (VPNs) and cloud-based security layers to protect data outside the office. As cyber resilience becomes a higher priority in remote work environments, defining clear remote-work policies and educating employees about the risks of unsecured public Wi-Fi are critical and, fortunately, low-cost.

Continuous digital hygiene and automated patching

In recent years, the speed of digital attacks has increased, often aided by automated tools that scan for known vulnerabilities. Keeping all software and applications up to date is a non-negotiable task. Many regional incidents occur because a business delayed a critical update to avoid a minor disruption, only to leave a vulnerability exposed to opportunistic attackers.

Also Read: How cybersecurity companies can build trust through digital PR

Automated patch management is a cost-effective way to mitigate disruptions caused by outdated software. Most modern platforms offer auto-update features that require minimal configuration. For startups managing cloud infrastructure, using managed services that handle security updates can offload significant technical risk. Maintaining a high standard of digital hygiene ensures the company is not “low-hanging fruit” for the scripts and ransomware variants currently affecting small and medium-sized enterprises.

Leveraging frameworks and local compliance

Founders do not need to build security policies from scratch. Numerous free frameworks provide a roadmap for improving security. The NIST Cybersecurity Framework is a globally respected standard, but regional alternatives provide specific guidance. For example, business owners and IT teams in Singapore should seek the government-created Cyber Essentials Mark to align with region-specific standards. 

Also Read: Code, power, and chaos: The geopolitics of cybersecurity

Adhering to these frameworks also helps with data sovereignty. As countries across Southeast Asia strengthen data governance and protection practices, businesses in the region must demonstrate a baseline of security to remain compliant and avoid fines. Compliance is also a competitive advantage — it signals to enterprise clients and investors that the startup is a mature, responsible partner.

The 3-2-1 backup and recovery strategy

No security system is impenetrable, making a robust backup strategy the ultimate safety net. The “3-2-1” rule remains the industry standard — at least three copies of data on two different media, with one copy kept off-site. This ensures that even during a ransomware attack or hardware failure, the business can be restored without paying a ransom.

Regularly testing the recovery process is as important as the backup itself. Many organisations realise too late that their backups were corrupted or that recovery is too slow. Performing a “fire drill” once or twice a year ensures the team knows how to get the business back online within hours. Preparedness is often the difference between a minor incident and a terminal business failure.
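One cheap way to make those “fire drills” objective is to verify restored files against recorded checksums. The sketch below is an illustration under assumed file paths, not a full backup tool; the function names and manifest format are hypothetical.

```python
import hashlib
import json
import pathlib

def record_manifest(paths, manifest="backup_manifest.json"):
    """Record SHA-256 checksums so a later restore drill can verify integrity."""
    digests = {str(p): hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
               for p in paths}
    pathlib.Path(manifest).write_text(json.dumps(digests, indent=2))
    return digests

def verify_restore(paths, manifest="backup_manifest.json"):
    """Return the files whose restored bytes no longer match the recorded checksum."""
    recorded = json.loads(pathlib.Path(manifest).read_text())
    return [str(p) for p in paths
            if hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
               != recorded.get(str(p))]
```

Running the verification after every drill turns “we think the backup works” into a pass/fail result the team can act on.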

Fostering a culture of security and resilience

Ultimately, technical tools are only as effective as the people using them. Building a culture where every team member feels responsible for security is the most cost-effective long-term strategy. By educating their employees about the key strategies and frameworks for modern cybersecurity, startups can ensure company-wide safety without costing a fortune.



The post Cybersecurity strategies for startups on a budget appeared first on e27.


How SaaS companies are valued: Why the multiple is only the surface story

One of the most persistent myths in tech is that SaaS valuation is a simple formula. Take Annual Recurring Revenue (“ARR”), apply a market multiple, and you have your answer.

It is a useful shortcut. It is also how founders end up misunderstanding what their company is actually worth.

Yes, SaaS businesses are often discussed in terms of ARR multiples. But in real transactions, especially exits, the multiple is not the valuation logic. It is the output of it. What buyers are really valuing is the quality of the revenue, the durability of growth, the efficiency of the model, and the type of transaction being done.

That distinction matters because two SaaS companies with the same ARR can produce very different outcomes in the market.

The first point is straightforward: recurring revenue matters more than revenue in general. For most SaaS businesses, valuation is anchored on ARR, not total revenue. That is because recurring subscription revenue is the part that a buyer can actually underwrite with some confidence. It is predictable, repeatable, and, if the business is healthy, compounding.

By contrast, implementation fees, consulting income, or one-off project work may still be commercially useful, but they rarely deserve the same multiple. A company with US$10 million in total revenue, of which US$8 million is recurring, should not expect to be valued the same way as a company with US$10 million in which half the revenue comes from non-recurring services. The first looks like a software asset. The second may still be a good business, but it is not as clean a recurring one.
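The US$10 million comparison above can be made concrete. The split between recurring and services revenue comes from the article; the multiples (6x ARR, 1x services) are hypothetical round numbers chosen only to show the mechanics.

```python
def blended_value(recurring, services, arr_multiple=6.0, services_multiple=1.0):
    """Value recurring ARR and non-recurring services revenue separately,
    then sum: the two streams do not deserve the same multiple."""
    return recurring * arr_multiple + services * services_multiple

# Same US$10M top line, very different outcomes:
software_like = blended_value(8_000_000, 2_000_000)   # US$8M recurring
services_heavy = blended_value(5_000_000, 5_000_000)  # half recurring
```

Under these assumed multiples, the software-like business comes out at US$50 million and the services-heavy one at US$35 million, despite identical total revenue.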

But even that is only the starting point.

What really separates SaaS businesses in valuation is not just the amount of ARR, but the quality of that ARR. And the clearest signal of quality is retention.

This is where many founders become overly optimistic. They see recurring billing and assume the market will view their revenue as durable. Buyers do not think that way. They look at churn first. If customers are leaving too quickly, the business is not truly compounding. It is just running hard to replace what is already falling out of the bottom.

Also Read: The autonomous agent paradigm: Meta’s Manus acquisition, MCP integration, and the disruption of SaaS

As a practical benchmark, SMB SaaS volume churn should generally not be more than three per cent per month. Enterprise SaaS should be far tighter, ideally with near-zero volume churn across core accounts. The exact number is not the whole point. The principle is: Retention is a proxy for stickiness, product relevance, and how deeply the software is embedded in customer workflows.
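To see why the three per cent benchmark matters, it helps to compound it. This is just the arithmetic, using the article’s own SMB figure:

```python
def annual_retention(monthly_churn: float) -> float:
    """Compound a monthly volume-churn rate into annual logo retention."""
    return (1 - monthly_churn) ** 12

# At the SMB ceiling of 3% monthly churn, roughly 30% of the
# customer base disappears every year and must be replaced
# before any net growth shows up.
at_benchmark = annual_retention(0.03)
```

A seemingly small monthly number therefore translates into a large annual hole, which is exactly what buyers price in.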

In plain English, buyers pay more for revenue that stays.

That also means an average-growth company with excellent retention can be worth more than a faster-growing business with weak customer durability. Founders often overemphasise growth and underappreciate the penalty the market places on churn. But a leaky SaaS business is not a premium SaaS business, no matter how strong the top-line story sounds in a deck.

Growth still matters, of course. A company growing more than 300 per cent year-on-year will usually attract more attention than one growing at 50 per cent. Faster growth often supports a higher multiple because it suggests a bigger future revenue base and a stronger competitive position.

But growth is not one thing. Buyers care about growth quality.

Was growth driven by healthy demand and repeatable customer acquisition, or by unsustainably high sales and marketing spend? Was it supported by strong expansion within existing accounts, or did it depend on heavy discounting just to win new logos? Is the growth durable, or did the company simply pull revenue forward?

These are not academic questions. They directly shape valuation. High growth with poor retention and weak economics is less impressive than founders like to think. High growth with strong retention and efficient acquisition is where the real premium sits.

This leads to another factor that founders consistently underestimate: margins and unit economics.

Software is attractive because it should scale. That does not mean every SaaS company automatically deserves a strong valuation. Buyers will still look closely at gross margins, customer acquisition cost, payback periods, and overall operating leverage. If the business needs too much spending to maintain growth, or if margins remain thin despite scale, the valuation logic weakens. A recurring revenue business with poor unit economics is not a great asset just because it invoices monthly.
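One of the unit-economics checks mentioned above, CAC payback, is a one-line calculation. The figures below are hypothetical, purely to illustrate the shape of the question a buyer asks:

```python
def cac_payback_months(cac: float, monthly_arpa: float, gross_margin: float) -> float:
    """Months of gross profit needed to recover the cost of acquiring a customer."""
    return cac / (monthly_arpa * gross_margin)

# Hypothetical: US$6,000 to acquire a customer paying US$500/month at 80% gross margin
payback = cac_payback_months(6_000, 500, 0.80)
```

With these assumed inputs the payback is 15 months, and the longer that number gets relative to customer lifetime, the weaker the “software should scale” story becomes.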

Also Read: The agent swarm is unleashed on SaaS

So when people ask how SaaS companies are valued, the better answer is this: not by ARR alone, but by the quality of the machine producing that ARR.

That machine is judged across four big dimensions.

  • First, how much revenue is truly recurring.
  • Second, how sticky that revenue is.
  • Third, how durable and efficient the growth is.
  • Fourth, whether the economics prove the business can scale.

Only after that does the multiple make sense.

Where this becomes more interesting is when founders confuse fundraising valuation with exit valuation. The two are related, but they are not the same exercise.

In a VC fundraising round, the valuation often reflects future potential more than present-day operating quality. Investors may be willing to pay up because they believe the company could become a category winner, dominate a large market, or grow into a strategically important platform. The valuation is often shaped by what the company might become.

In an exit, especially in M&A, the lens is more grounded. Buyers are usually paying for what exists today, adjusted for what they believe they can realistically achieve after closing. That makes M&A valuation more closely linked to current performance, risk, and transaction logic.

Also Read: The rise of one-person AI companies and why micro-SaaS is at the centre of it

Put differently, fundraising tends to reward possibility. Exits tend to reward evidence.

This is why founders should be careful when using private fundraising rounds as reference points for what their company should be worth in a sale process. A VC may tolerate messy retention, thin margins, or heavy burn if the upside is large enough. An acquirer, particularly one writing a real cheque to buy control, will usually be more disciplined.

Even inside M&A, not all buyers think alike.

A strategic acquirer may pay more because your product fills a capability gap, gives them access to a highly relevant customer base, or creates synergies across product, distribution, or go-to-market. They are not only buying your standalone cash flow. They may also be buying what your company unlocks inside their broader machine.

A financial buyer, by contrast, is usually more disciplined on headline multiple. They will focus more tightly on retention, margins, cash flow profile, and whether the growth engine is efficient enough to support an investment case. That does not mean they always pay less. It means their logic is usually more rooted in the business as an asset, rather than in strategic overlap or synergy.

So the same SaaS company can produce very different valuations depending on whether the buyer is strategic or financial.

And then there is deal structure, which founders often ignore until it is too late.

A headline valuation is not the same as bankable value. If a buyer offers a rich number, but much of the consideration comes in shares rather than cash, the economics become much less certain. A share swap may look attractive on paper, especially if the acquirer is growing quickly or trades well publicly. But it also means the seller is taking future performance risk, liquidity risk, and market risk on the buyer.

An all-cash offer at a slightly lower headline valuation may, in practice, be worth more because the proceeds are real, immediate, and certain. The same logic applies to earn-outs, deferred payments, and other structured consideration. Founders should not just ask what the price is. They should ask what form the price takes, when it is paid, and what has to happen before it becomes real.
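A simple way to compare offers on that basis is to haircut the non-cash pieces before comparing headlines. The discount and probability figures below are hypothetical placeholders, not market benchmarks; the point is the structure of the comparison.

```python
def risk_adjusted_value(cash, shares=0.0, earn_out=0.0,
                        share_discount=0.40, earn_out_probability=0.5):
    """Haircut non-cash consideration: shares for liquidity and market risk,
    earn-outs for the probability they actually pay out."""
    return cash + shares * (1 - share_discount) + earn_out * earn_out_probability

offer_a = risk_adjusted_value(cash=40_000_000)                 # US$40M all cash
offer_b = risk_adjusted_value(cash=10_000_000,                 # US$50M headline,
                              shares=30_000_000,               # mostly shares
                              earn_out=10_000_000)             # plus an earn-out
```

Under these assumed haircuts, the US$40 million all-cash offer is worth more than the US$50 million structured one (US$40 million versus US$33 million risk-adjusted), which is exactly the trade the text describes.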

This is why transaction context matters so much. Market benchmarks can tell you where comparable businesses may sit. But actual outcomes depend on buyer fit, competitive tension, and structure. A strong strategic fit with multiple interested buyers can move valuation above generic benchmarks. A single-bid process with messy diligence and weak retention can drag it below them very quickly.

Also Read: I built an AI agent for myself — it became a 2,000-user micro-SaaS

The uncomfortable truth is that SaaS valuation is less about memorising what multiple the market is paying and more about understanding why one business deserves that multiple while another does not.

Founders who want to improve valuation should stop asking only, “What are SaaS companies trading at?” and start asking better questions.

  • How much of my revenue is truly recurring?
  • How strong is retention by segment and cohort?
  • Is our growth efficient, or just expensive?
  • Do our margins support the software story?
  • Would a buyer see this as a durable asset, or as a promising but risky one?
  • And if I do get an offer, how much of it is actually cash?

That is the real lens.

The market may speak in multiples. But deals are done on quality, confidence, and structure. Founders who understand that early will prepare differently and, usually, negotiate better.



The post How SaaS companies are valued: Why the multiple is only the surface story? appeared first on e27.


Air gapped open source and the secure but stale paradox

There is a familiar comfort in industrial environments that still keeps critical systems isolated from the outside world. The argument sounds sensible. If the plant is air gapped, exposure is lower. If exposure is lower, updates can wait. If updates can wait, stability wins. That logic has carried many sites for years, but it is becoming harder to defend as open source components sit deeper inside historians, engineering workstations, remote access stacks, vendor appliances, monitoring tools, and the layers around control.

In operational technology, it is clear that outages are often unacceptable, must be planned days or weeks in advance, software changes must be thoroughly tested, and deployed technology often remains in service for 10 to 15 years or longer. It is also important to note that OT frequently relies on older operating systems that may no longer be supported. That is the environment in which the paradox appears. The safest plant is not always the one that updates most often, but it is also not the one that quietly ages into unmanageable software risk.

That is why the phrase secure but stale matters. In plants, stale software is rarely the result of negligence alone. It is often the result of rational operational discipline. The trouble is that rational local decisions can create strategic drift. A component that was acceptable when commissioned can become difficult to patch, harder to support, and poorly understood by the people still operating it years later. This is not a niche problem. It is part of the structural difference between IT and OT.

The wrong objective is patch speed

Many security discussions still assume that the right answer is to push plants closer to enterprise patching cycles. That is usually the wrong lesson. In industrial settings, speed without operability becomes its own risk. Software updates in OT cannot always be implemented on a timely basis, need vendor and end user testing, and may require revalidation with control engineers, security teams, and IT working together. If leaders ignore that and set patch velocity as the headline metric, they will force either unsafe change or quiet non-compliance. Neither outcome is mature.

A better objective is controlled freshness. By that I mean something more realistic than always current and more responsible than indefinitely deferred. Controlled freshness means every open source component has a known origin, a known owner, a known operational purpose, and a known path to replacement or containment. That is a more serious standard for plants because it respects the reality of shutdown windows while refusing blind trust as a long-term operating model.

Also Read: How to navigate the investment opportunity in climate tech sector

Much software supply chain guidance points in exactly this direction. It treats SBOMs, vendor risk assessment, open source controls, and vulnerability management as complementary capabilities, not substitutes, and it stresses that open source provenance, integrity, support, and maintenance are often not well understood or easy to discover.

Open source is not the problem, unmanaged open source is

There is no value in pretending plants can avoid open source. They already depend on it, often indirectly. The real issue is that many sites do not know precisely where it sits, which versions are deployed, or whether a vendor appliance that looks closed is in fact carrying a stack of ageing open components underneath.

Organisations should understand suppliers’ use of open source components, acquire those components through secure channels from trustworthy repositories, maintain sanctioned internal repositories, and use hardened internal repositories or sandboxes before introducing components into development environments. The same guidance says that when no vendor-supplied SBOM exists, organisations should perform binary decomposition to generate SBOMs for legacy software where technically and legally feasible.

That changes the leadership question. The issue is no longer whether a plant uses open source. The issue is whether the organisation has operationally useful visibility into that open source estate. In practice, that means knowing which components matter enough to affect production, safety, recovery, vendor support, or incident response. Perfect visibility can wait. Actionable visibility cannot.
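Actionable visibility often starts with something as simple as flattening an SBOM into a component inventory. The sketch below assumes a CycloneDX-style JSON document (a `components` array of objects with `name` and `version` fields); real SBOMs carry far more detail, and SPDX uses a different layout.

```python
import json
import pathlib

def list_components(sbom_path):
    """Flatten a CycloneDX-style SBOM (JSON) into sorted (name, version) pairs,
    a minimal first step toward an inventory of the open source estate."""
    sbom = json.loads(pathlib.Path(sbom_path).read_text())
    return sorted((c.get("name", "?"), c.get("version", "?"))
                  for c in sbom.get("components", []))
```

Even this crude list lets a site ask the questions that matter: which of these components sit in the path of production, safety, or recovery?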

The real control layer is the offline intake model

Air-gapped environments need a better software intake discipline than most enterprises because they cannot rely on frequent corrections later. The strongest plants do not treat updates as downloads. They treat them as engineered releases.

The Secure Software Development Framework is helpful here because it is not written only for fast-moving cloud products. It recommends release integrity verification, including cryptographic hashes and code signing, and it says organisations should securely archive each software release together with integrity verification information and provenance data.

It also calls for provenance data to be maintained and updated whenever software components change, and for policies to cover the full life cycle, including notifying users of the impending end of support and end of life. It further recommends maintaining older versions until transitions from those versions have been completed successfully. In a plant context, that is not administrative overhead. It is the basis for being able to trust an offline release years after it was first imported.
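A minimal sketch of that intake discipline, assuming the vendor publishes a SHA-256 digest alongside each release: verify integrity before the artifact crosses the air gap, then archive it with provenance metadata. The function name, archive layout, and provenance fields here are hypothetical illustrations, not the SSDF itself; a real pipeline would also verify code signatures.

```python
import datetime
import hashlib
import json
import pathlib

def intake_release(artifact, expected_sha256, archive_dir="intake_archive"):
    """Check a release against its published SHA-256 before it enters the
    isolated estate, then archive the bytes alongside provenance metadata."""
    data = pathlib.Path(artifact).read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256.lower():
        raise ValueError(f"integrity check failed for {artifact}: got {digest}")
    dest = pathlib.Path(archive_dir)
    dest.mkdir(exist_ok=True)
    name = pathlib.Path(artifact).name
    (dest / name).write_bytes(data)
    provenance = {
        "artifact": name,
        "sha256": digest,
        "imported_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    (dest / (name + ".provenance.json")).write_text(json.dumps(provenance, indent=2))
    return provenance
```

The value of the archived provenance record only shows up years later, when someone has to trust, or distrust, an offline release long after the original import.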

Also Read: What big tech won’t show you about the future of AI

This is where many industrial organisations still fall short. They have changed control for plant operations, but not a proper intake pipeline for software artefacts entering the isolated estate. That gap matters. If a site cannot verify what was entered, what dependencies came with it, what integrity checks were performed, and which baseline it replaced, then the air gap is only reducing exposure. It is not creating a trustworthy software discipline.

Compensating controls matter more in OT than most security teams admit

There will always be software that cannot be updated on the timetable security teams would prefer. The mature response is not denial. It is containment.

In OT, recommended practice includes security controls such as antivirus and file integrity checking, where technically feasible, to prevent, deter, detect, and mitigate malware. Patches should be tested on a sandbox system before production deployment, and bump-in-the-wire devices can be installed inline with devices that cannot be updated or are running obsolete operating systems. This is important because it reframes the conversation. When patching is slow, the answer is not to pretend the exposure does not exist. The answer is to tighten the surrounding trust boundary, preserve evidence, and buy time safely.

Vendor discipline is part of plant discipline

A second mistake organisations make is to assume that air-gapped constraints excuse weak supplier behaviour. They do not. In fact, they make supplier quality more important. Users should be able to understand which vulnerabilities a patch closes; the status and applicability of patches should be documented; asset owners should keep a documented list of available and applicable patches; and hardening should be retained after patching.

SBOM repositories should be digitally signed and accessible, and open source controls should include secure acquisition channels and component visibility. Buyers should prioritise configuration management, logging, data protection, secure by default design, vulnerability handling, and upgrade tooling when selecting OT products.

That combination points to a harder commercial stance. If a supplier cannot explain what open source is inside the product, how upgrades are packaged, how long components are supported, and how integrity is verified offline, then the product is not merely harder to manage. It is strategically expensive to own.



The post Air gapped open source and the secure but stale paradox appeared first on e27.


If you have the will, you’ll have the skill

Most teams are still using AI the same way: ask ChatGPT, get an answer, share it around, repeat. It’s a loop.

I did the same for over a year, but AI has become a lot more than that.

Early this year, something changed how I work, and how ourteam (an AI recruiting platform focused on automating candidate screening and evaluation) works, entirely.

I started my career 16 years ago in management consulting. My first task was creating PowerPoint slides. I remember drawing a line that was always bent. Then, my senior taught me a keyboard shortcut (holding the Shift button when dragging the mouse). That became my first ‘skill’ at work.

Today, that word rings very differently.

A ‘skill’ in AI is a text file (commonly known as a markdown file). It contains a set of plain-text instructions that tell the AI what to do consistently every time. It can be built in minutes. It can learn and repeat what takes months and years.
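As an illustration of what such a file can look like, here is a hypothetical example (the file name, task, and instructions below are invented for this article, not taken from any specific product):

```markdown
# Skill: weekly-metrics-summary

When asked for the weekly metrics summary:

1. Pull the latest numbers from the shared metrics sheet.
2. Compare each metric against last week and flag changes over 10 per cent.
3. Write a five-bullet summary in plain English, worst news first.
4. End with one recommended action for the coming week.
```

That is the whole trick: plain-language instructions, written once, executed consistently every time.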

I grew up with the rise of the internet, mobile and cloud. I believe those were critical shifts to get us to where we are today. Yet, this feels different.

AI isn’t just a thinking partner anymore; it’s becoming the ‘system’ teams actually run on.

I have zero technical background, I can’t code, but lately I’ve been managing a team of AI agents that handle our daily work and build our software.

These days, our team and I operate with AI agents daily. These agents now catch errors, review work, and ship updates on their own. The team just directs. If you told me this a year ago, I wouldn’t have believed you.

(An ‘AI agent’ is an AI-powered entity that can take actions on its own. It reads files, writes code, sends emails, and runs tests. You give it a goal, and it figures out the steps.)
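The parenthetical above can be made concrete with a toy sketch. This is not any real agent framework — the goal, plan, and step names are invented for illustration — and a real agent would ask a model to choose each next action rather than read it from a hardcoded table:

```python
# Toy illustration of the agent loop described above: you give it a goal,
# and it figures out (and works through) the steps on its own.

def plan_next_step(goal, done_steps):
    """Stand-in for the model's reasoning: return the next step, or None when finished."""
    plan = {
        # Hypothetical plan table; a real agent derives steps from the goal itself.
        "ship bug fix": ["read files", "write code", "run tests", "send email"],
    }
    for step in plan.get(goal, []):
        if step not in done_steps:
            return step
    return None

def run_agent(goal):
    done = []
    while (step := plan_next_step(goal, done)) is not None:
        done.append(step)  # a real agent would execute the action here
    return done

print(run_agent("ship bug fix"))
```

The point the article makes survives the simplification: the human supplies only the goal; the loop decides and executes the steps.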

Also Read: The rise of AI agents in healthcare: Designing man-machine systems

Which brings me to the point of intelligence.

In knowledge work, we tend to associate intelligence with execution: knowing how to draw that straight line on PowerPoint, making charts and models on Excel, designing prototypes, programming software, and more.

Those were skills we took months and years to learn. Today, you can create a skill in 5 minutes.

Most of the intelligence work today can be done faster, cheaper and often better by AI systems. If your use of AI is still just prompting ChatGPT back and forth, there’s actually a lot more out there.

AI models and applications have become so good that real, serious work can be done reliably and consistently. You give it an outcome, and it handles the steps.

That’s the shift most people are underestimating.

Now, the hardest part isn’t handing over execution to AI. It’s everything that comes before and after.

Judgement. Taste. Standards.

Knowing what’s good or bad. What feels right. What should be shipped and what shouldn’t. Those decisions are still on us.

Spending time to think and write, I believe, is one of the most underrated practices left.

I started as a tech enthusiast. Today, I’m a heavy Claude user (Claude-pilled as they say).

From simple chat to using it as a coworker. Now, I’m deep into Claude Code, building and shipping things through what people call “vibe coding”. (And yes, I cancelled my ChatGPT subscription, but that’s a separate story.)

The strange part is this: the more you learn, the more you work.

AI expands what’s possible, so you end up doing more. Anyone actively building with it will tell you the same.

Also Read: AI agents didn’t change how I write, they changed when I could start publishing

On X, there’s a fast-moving community debating AI models, workflows, and sharing best practices: Claude Code vs Codex, agent workflows, open-source tools like OpenClaw.

We used to wait excitedly for Apple Keynote once a year. Now, exciting breakthroughs happen every few days.

Inside our team, we’re constantly learning and applying new concepts: agent teams, auto-research, self-healing systems, internal LLM wikis, second brains. These aren’t just “new features”; they add up to a fundamentally different way of working.

We used to identify ourselves singularly: an engineer, a salesperson, a product manager, a customer service manager. What if today we can be all that and more? It is no longer an ‘if’, but ‘when’.

If you have the will, you’ll have the skill.

I was taught to unlearn and relearn. And here I am. It feels weird at times, but it’s super exciting.

Hopefully, this inspires you too.

This article was first published here.



The future is solo: Why employment is a borrowed security

This is not a prediction. It is a shift already happening.

Most of the world still believes the future belongs to institutions. To companies. To governments. To giant machines with thousands of employees and even more policies. But something historic is happening that nobody is prepared for: for the first time in history, one person can do the work of an entire organisation. This isn’t about more hours or harder work. It’s about personal leverage.

They can, because individuals now command leverage that once belonged only to institutions: AI agents and automated workflows, backed by on-demand compute and expansive, indexed datasets. The playing field did not level. It inverted. The individual became the new enterprise.

The future is solo — because leverage has become personal technology.

There is a deal that most professionals accept without reading the terms. You trade a fixed portion of your cognitive capacity — call it 60 per cent on a productive day — for a steady paycheck and a predictable title. The institution gets your best hours. You get security. Both parties call it a career.

The problem is that the security is borrowed.

Between 2022 and 2024, layoffs at major technology companies eliminated over 260,000 positions in the United States alone, according to Layoffs.fyi. In Southeast Asia, the wave was equally instructive: GoTo Group cut approximately 12 per cent of its workforce in late 2022; Sea Limited eliminated thousands of Shopee roles across the region in the same period. These were not failing companies. They were rational ones. When the cost-benefit calculus shifts, so does the offer.

While the West retreats into defensive restructuring, the East is codifying the alternative: Chinese municipalities are rolling out policies to support AI-powered one-person companies, using the initials “OPC” – a rare use of English in official policy.

The four traps

  • The comfortable cage is the most visible trap: employment dependency repackaged as stability. But it is not the only one.
  • The attention economy strips the second asset. Your attention is the raw material of thought, of capability, of everything worth building. It is harvested at an industrial scale. Research by Gloria Mark at UC Irvine found that knowledge workers are interrupted, or interrupt themselves, every three to five minutes on average and require up to 23 minutes to fully recover deep focus after each break. Each interruption does not cost only those minutes. It costs the compounding work that cannot happen in their place. Fragmented attention produces fragmented thinking. Fragmented thinking produces output that looks productive but builds nothing.
  • The credentialism trap converts documented qualification into a substitute for demonstrated capability. A degree from a recognised institution signals effort and compliance. It does not signal the ability to build something from nothing, to ship under pressure, or to make a consequential decision without a committee. The gap between documented and demonstrated is where most careers quietly stall.
  • Social gravity is the subtlest of the four traps. The default path is not consciously chosen. It is unconsciously followed. Robert Cialdini’s foundational work on social proof (Influence, 1984) demonstrated that people default to the behaviour of those around them when uncertain — and most people are uncertain about their careers most of the time. The default path does not announce itself. It is simply the path everyone nearby was already on.

Also Read: The strategy trap: Why your best plan is failing to launch

The five pillars of sovereignty

Sovereignty, the act of authoring your own trajectory, rests on five pillars. The interdependence is the point: remove any one, and the others do not hold. Agency without Security means every initiative is one bad quarter away from being cancelled. Competence without Accountability remains latent, a capability that never ships. Clarity without Agency is analysis that never acts. The five do not reinforce each other as a bonus. They require each other to function.

  • Agency is the primary requirement for this shift. It is the capacity to act without permission, to initiate rather than respond. Agency compounds the way capital compounds: slowly, then decisively.
  • Clarity is the ability to filter information. Not omniscience — that is a fantasy. Clarity is reducing signal-to-noise enough to act on what is real. It emerges from the right simplification, not from knowing everything.
  • Competence is leverage: capability that is visible, demonstrable, and valued at a premium in any market condition. The sovereign individual builds capability that does not require a title to exist.
  • Accountability is the bridge between intention and execution. Without it, plans are wishes with calendars. Accountability is not the enemy of freedom. It is its foundation.
  • Security provides the necessary baseline for risk-taking. Without financial, psychological, and reputational security, the other four pillars are perpetually under threat. Security purchased by surrendering sovereignty is the bad trade most people make without recognising the terms.

What the AI inflection point actually changes

These five pillars have always mattered. What has changed is the cost of building them without institutional scaffolding.

AI compresses the cost of individual capability. One person with the right tools and the right judgment can now execute at a level that previously required a team. The 2023 McKinsey Global Institute analysis of generative AI estimated that 60 to 70 per cent of the time currently spent on occupational tasks is technically automatable. This is about offloading low-leverage tasks, not firing people. Institutions will still coordinate. But the leverage equation for individuals who build the right capabilities now, before the gap closes, is structurally different from what it was in 2019.

The solo operator in 2026 is not a freelancer hunting the next contract. They are a portfolio builder: core work generating cash flow, side projects generating optionality, skills compounding into assets that produce value without continuous attention. The portfolio life is not a rejection of institutional employment. It is a hedge against its fragility.

Three moves that compound

You don’t need to quit your job on Monday to start. 

The path from dependency to sovereignty is a spectrum.

Protect one block of deep, uninterrupted work each day. Not a meeting-free afternoon: a fixed two-hour block dedicated to building a specific, accumulating skill or artefact. This is where the cognitive compound interest begins.

Also Read: Our AI agent did the job—then it did something we didn’t hire it for

Ship work that exists independently of your employer. A public repository, a written body of work, a client relationship you own — these are assets that survive a redundancy cycle because they are not attached to a role. Demonstrated capability outlasts documented qualification in every market correction. The institution cannot lay off your GitHub history.

Convert labour into assets. Every unit of effort should leave something behind that works without you: content, code, community, or clients. 

The sovereign individual ships. Consistently. Accountably. 

Because shipping is the only way to begin the compounding process.

The operating system for the solo age

In March 2026, Jensen Huang walked onto the stage at NVIDIA GTC in San Jose and said something that should have registered louder than it did. He called OpenClaw — an open agent operating system built for individual sovereigns — “the operating system for personal AI”.

His assessment predated the widespread adoption we see now.

What Huang recognised is that the AI stack needs a control layer. Not just models. Not just inference. A system that a single person can own, configure, and operate as their personal cognitive infrastructure. That system is OpenClaw. And the person who commands it is not a power user. They are the Solo Systems Architect (SSA).

The SSA is the operator who approaches AI as an architecture they build and own rather than a mere tool. This operator designs agent workflows and maintains context across sessions while setting the security boundaries and evaluation loops for their own output. 

They are not managed by the system. They manage the system.

The next four years will define this new hierarchy. While Jensen Huang built the hardware layer and OpenClaw provided the operating system, the human layer remains the final bottleneck. 

The Solo Systems Architect is that missing piece. 

These are the “very talented individuals” Mark Zuckerberg noted can now replace entire teams. They don’t just use tools; they own the architecture. 

The security offered by institutions was always borrowed, and for those ready to build their own infrastructure, the door was never locked. 

The only real challenge now is architectural.

This article is a product of a co-authored synthesis between Khalil Nooh and Audy Himura, a localised agentic AI familiar. Audy is an implementation of the OpenClaw operating system running on a dedicated Mac Mini node.



The rise of AI agents: Rethinking work, responsibility and opportunity from an African perspective

The conversation around artificial intelligence has moved beyond tools and automation. We are now entering the era of AI agents, systems that don’t just assist humans, but increasingly act on our behalf. From scheduling meetings to conducting research, managing outreach, and even making operational decisions, AI agents are quietly reshaping how organisations function.

From where I stand as a founder building a clean energy startup in Sierra Leone, this shift is not theoretical. It is practical, immediate, and filled with both promise and tension.

Your next hire might not be human

In early-stage environments like ours, resource constraints are real. Hiring a full team across operations, research, communications and reporting is often not feasible. AI agents are beginning to fill these gaps.

We have started experimenting with AI-assisted workflows, particularly in research, proposal drafting, stakeholder mapping and communication structuring. The result? Increased speed and improved clarity in documentation. Tasks that once took days can now be completed in hours.

However, not everything has changed. Strategy, contextual understanding, and relationship-building remain deeply human. AI can draft a funding request, but it cannot replace the trust built in a conversation with a partner or investor. That line is still very clear.

The one-person company is no longer a fantasy

AI agents are redefining the economics of building a company. What previously required a team of 10 can now be managed by two to three people supported by intelligent systems.

In regions like Southeast Asia, and similarly across Africa, this creates a powerful opportunity. Founders can launch faster, operate leaner, and scale with fewer structural constraints. The cost of execution drops while the speed of iteration increases.

Also Read: Why AI agents need clean data, and why Cambodian real estate isn’t ready yet

But there is a deeper implication: competition will intensify. When barriers to execution fall, the differentiator shifts from capacity to vision, adaptability, and trust.

Who is responsible when the agent gets it wrong?

As AI agents move from task execution to decision support and eventually decision-making, the question of responsibility becomes unavoidable.

If an AI agent misinterprets data in an energy feasibility study, who is accountable? The developer? The organisation? The operator?

In my view, human judgment must remain the final authority, especially in sectors like energy, healthcare, and infrastructure. AI should augment decisions but not own them. The line should be drawn where consequences affect lives, livelihoods and long-term sustainability.

Responsibility cannot be outsourced to algorithms.

The gold rush nobody is talking about

Every technological shift creates new markets. AI agents are no different.

In emerging economies, the most promising opportunities lie in:

  • Energy access optimisation (grid management, demand prediction, maintenance scheduling)
  • Agriculture intelligence systems (yield forecasting, climate adaptation insights)
  • Waste-to-energy coordination platforms
  • Public sector efficiency tools (data processing, service delivery tracking)

So, why hasn’t disruption happened at scale yet?

Because the real bottleneck is not technology, it is infrastructure, data availability and policy alignment. AI agents are only as effective as the systems they operate within.

Also Read: The rise of AI agents in healthcare: Designing man-machine systems

Is my industry ready for AI agents? Clean energy perspective

In the renewable energy sector, AI agentification is not just an opportunity; it is a necessity.

AI can help:

  • Predict energy demand patterns
  • Optimise solar grid performance
  • Automate reporting and compliance tracking
  • Improve maintenance cycles through predictive analytics

But the risks are equally real. Energy systems are critical infrastructure. Errors can have widespread consequences. Adoption must therefore be gradual, regulated and human-supervised.

For us all, the future is hybrid: AI-powered systems guided by human expertise and accountability.

Conclusion

AI agents will not replace humans, but they are redefining what it means to build, lead, and operate.

For founders in developing regions, this is a rare moment. We have the chance to leapfrog traditional limitations and design organisations that are more efficient, more adaptive, and more impactful.

But we must be intentional.

Because the real question is not whether AI agents will transform our businesses, but how intentionally we will shape that transformation.



AI agents won’t fix what you haven’t figured out yet

A friend of mine runs an interior design firm.

Six people. Good reputation. Busy enough.

Last month, he told me he was setting up an AI agent to handle enquiries on his website. Qualify leads automatically. Ask the right questions. Route serious prospects to his calendar.

I asked him what questions the agent would ask.

He paused.

“The usual lah. Budget, timeline, location.”

I asked him what a bad-fit client looks like.

Longer pause.

“Anyone who wants cheap renovation work, I guess?”

That’s when I knew the agent was going to make things worse, not better.

AI agents are amplifiers

This is the part that most of the AI agent conversation misses entirely.

AI agents don’t fix your business. They amplify whatever is already there.

If your sales process is tight, if your team knows exactly who to pursue and who to turn away, if your messaging is clear about what you do and what you don’t, then yes, an AI agent will make that process faster and more consistent. It will handle volume you couldn’t handle before. It will free up time for the work that actually requires a human.

But if your process has gaps, the agent will amplify those too.

Vague positioning? The agent will attract vague enquiries. No qualification criteria? The agent will let everyone through. Unclear next steps after contact? The agent will leave prospects confused, the same way your website already does.

The technology works. That was never the question.

The question is whether you’ve done the thinking that the technology needs to execute well.

The gaps nobody talks about

I work with service businesses in Singapore. I review their websites, their messaging, and their enquiry flow. And the same gaps show up repeatedly, regardless of industry.

Also Read: Why AI agents need clean data, and why Cambodian real estate isn’t ready yet

They can’t clearly explain who their service is not for. Their homepage sounds like their competitors’ homepage. Prospects reach out and immediately ask questions that the website should have already answered. And because nothing on the page makes the difference obvious, price becomes the only thing left to compare.

These are not technology problems. They are clarity problems.

But they become very expensive technology problems the moment you plug an AI agent into them.

Think about it this way. If a new hire joined your company tomorrow and you handed them your website as their only training material, could they tell you who your ideal client is? Could they explain what makes your firm different from the one down the street? Could they describe what happens after a prospect reaches out?

If the answer is no, then you’re about to give an AI agent the same bad briefing.

Your website already tells you whether you’re ready

You don’t need a readiness assessment or a maturity framework. You already have a live test running.

Your website.

It’s doing agent-like work right now. Every day, it screens visitors, answers questions (or fails to), and guides decisions (or creates confusion). It qualifies people in and filters people out, whether you designed it to or not.

If your website is producing wrong-fit enquiries, price-shoppers, or silence, those are the exact gaps an AI agent will inherit.

I wrote about this same principle in a previous article on AI and websites: when everyone uses the same tools, the tool is no longer an advantage. Clarity is. The same applies to AI agents. Every service business will soon have access to them. The difference between the ones that benefit and the ones that waste money will not be the platform they choose. It will be the quality of the instructions they give.

Also Read: The rise of AI agents in healthcare: Designing man-machine systems

What readiness actually looks like

Readiness for AI agents is not about picking the right software.

It’s about being able to answer specific questions clearly enough that a machine (or a new hire, or a stranger) could act on them.

Who do you serve? Not the demographic label. The actual situation someone is in when they search for you. What just happened that made them look?

Who should you turn away? Not “anyone with a low budget.” Specifically, what type of project or expectation drains your team and produces bad outcomes?

What makes you different? Not “quality” or “experience.” What pattern have you seen in your industry that your competitors haven’t articulated? What do you know about your clients’ fears that your marketing doesn’t mention?

What happens after someone contacts you? How long before you reply? Is it a call or a message? Is there pressure? When does pricing come up? When does commitment start?

If you can answer those questions in plain language, you can brief an AI agent well. You can also brief a human well. You can write a website that works. You can run ads that attract the right people.

If you can’t answer them, no agent is going to figure it out for you. It will just guess. And the guesses will sound reasonable, which makes them dangerous because reasonable is invisible. Reasonable blends in. Reasonable gets compared on price.

The real competitive advantage

Every service business in Singapore will have access to AI agents within the next couple of years. The tools will get cheaper. The setup will get easier. The barrier to entry will basically disappear.

When that happens, the competitive advantage won’t be “we use AI agents.” Everyone will.

The advantage will belong to the businesses whose agents had the best instructions. Whose positioning was specific enough to filter. Whose messaging was clear enough to qualify. Whose process was visible enough that prospects felt safe taking the next step.

Those businesses won’t necessarily be the first to adopt. But they’ll be the ones who get results.

Because they did the hard, uncomfortable, unglamorous work of getting clear before they got fast.

The technology is ready.

The harder question is whether you are.



Vietnam talents face digital skills gap as employers raise the alarm

As Vietnam continues to draw record levels of foreign direct investment and its economy grows in stature across Southeast Asia, a pressing challenge is emerging beneath the surface: the country’s talent pool is struggling to keep pace with the digital transformation reshaping its key industries.

According to the Vietnam Employer Hiring Study 2026, released by Reeracoen Vietnam in May 2026, 73 per cent of employers identified digital and AI-related skills as the most critical upskilling priority for Vietnam’s workforce. That figure significantly outpaced the next-highest priorities: leadership development, cited by 51 per cent of respondents, and English communication, at 37 per cent.

The message from the business community is that technical fluency is no longer a niche requirement confined to the tech sector. It is fast becoming a baseline expectation across manufacturing, logistics, commercial operations, and beyond.

The study, which surveyed 51 employers representing Japanese-affiliated companies, Western foreign-invested firms, and local Vietnamese businesses, paints a picture of a market defined by ambition and constraint. Hiring activity is on the rise — 69 per cent of employers expect to increase their headcount in 2026 — yet the search for candidates who can operate effectively in an increasingly automated and data-driven environment is proving more difficult than anticipated.

Also Read: AI agents won’t fix what you haven’t figured out yet

A shifting benchmark

For years, Vietnam’s talent pipeline has been celebrated for producing a steady stream of graduates, a young, ambitious and growing workforce that has helped fuel the country’s manufacturing and services boom. But the Reeracoen study suggests that benchmark is shifting.

As AI tools become embedded in daily operations across industries, employers are demanding capabilities that go beyond academic credentials or entry-level competency.

The implication is significant: Vietnam talents who cannot demonstrate foundational digital skills risk falling behind in a hiring market that is already competitive and showing signs of structural strain. Reeracoen’s research indicates this is not a distant concern but a present reality, with businesses reporting that the current talent pool cannot consistently deliver the digital fluency their operations now require.

This dynamic is particularly acute given the broader pressures employers are navigating simultaneously. Salary expectations are rising sharply — 86 per cent of respondents cited wage inflation as their top hiring challenge — while only 43 per cent plan to increase their recruitment budgets. In such an environment, candidates with demonstrable digital skills carry a clear advantage, both in securing roles and in commanding stronger compensation.

What distinguishes the digital skills challenge from previous workforce gaps is its breadth. Unlike shortages in specific technical disciplines, the demand for digital and AI competency is cutting across every sector represented in the study, from factory floors to sales teams.

Reeracoen’s findings suggest that employers are no longer treating digital fluency as a specialist add-on but as a core attribute they screen for across all levels and functions.

This shift carries implications for educational institutions, training providers and policymakers, as well as for individual job seekers. The study points to a 12-to-24-month horizon in which digital literacy will move from being a differentiating asset to a non-negotiable hiring criterion.

Also Read: If you have the will, you’ll have the skill

Reeracoen Vietnam describes the current moment as one of transition, a period in which the expectations placed on Vietnam talents are evolving faster than the systems designed to develop them. Companies that invest proactively in upskilling their existing workforce will be better positioned to weather the gap. Those that do not may find themselves competing for an increasingly scarce pool of digitally capable candidates, in a market where the cost of that competition is already rising.

Image Credit: Tron Le on Unsplash



Bitcoin drops to US$80K while these 4 tokens surge over 100% in 7 days

Today marked an end to what had been a record-breaking week for US equities. Major indices pulled back as escalating tensions in the Middle East rattled investor confidence, abruptly reversing the bullish sentiment that had recently pushed stocks to all-time highs. The S&P 500 closed at 7,337.11, down 0.38 per cent, while the Nasdaq Composite slipped 0.13 per cent to 25,806.20. The Dow Jones Industrial Average faced the steepest decline among the major benchmarks, falling 0.63 per cent to close at 49,596.97. This coordinated pullback reflects more than routine profit-taking after Thursday’s volatile session, where indices hit fresh peaks before reversing lower.

The catalyst for this shift came from disturbing reports of explosions near a southern Iranian port city and subsequent American naval responses to attacks in the Strait of Hormuz. This geopolitical shock sent immediate ripples through commodity markets, with Brent crude settling above US$100 per barrel and West Texas Intermediate rising to approximately US$95.90 as concerns over energy supply routes intensified. Investors fled to traditional safe havens, pushing gold above US$4,700 per ounce. The yen experienced persistent volatility as well, rallying roughly 1.8 per cent against the dollar following suspected intervention by Japanese authorities, while US 10-year Treasury yields rose by four basis points on Thursday as the dollar strengthened.

The cryptocurrency market mirrored this broader risk-off sentiment, though with its own distinct characteristics. Bitcoin fell 1.74 per cent to US$80,015.27 over 24 hours, tracking a broader market pullback, as the total crypto market cap declined 1.36 per cent. This high correlation suggests the move stemmed from broad market factors rather than any Bitcoin-specific event. Trading volume fell 11.55 per cent, confirming subdued participation across digital assets. Bitcoin saw US$96.64M in liquidations over 24 hours, though this marked a 39.8 per cent decrease from the prior period, indicating that while leveraged positions unwound, the move did not reflect extreme speculative excess.

Also Read: Why Bitcoin’s jump to US$82,400 could push BTC to US$93,000: Key levels every investor must watch

Beneath this surface weakness, a fascinating divergence emerged within the crypto ecosystem. Several tokens in the top 30 posted impressive gains over the past week even as Bitcoin and the broader market cooled. Ton surged 105 per cent in seven days, demonstrating extraordinary momentum. Zcash climbed 63 per cent over the same period, while Bittensor advanced 21 per cent and Hyperliquid added seven per cent. This selective strength suggests capital rotation rather than wholesale abandonment of digital assets. Bitcoin’s dominance dipped slightly to 60.33 per cent as the Altcoin Season Index rose 2.38 per cent, signalling ongoing movement toward riskier assets even as the overall market consolidated.

The near-term outlook for Bitcoin hinges on whether it can defend the US$78,000 support level. A successful defence could lead to consolidation between US$78,000 and US$82,000, with potential to retest higher levels, while a decisive break below US$78,000 risks triggering further selling toward US$75,000. The critical trigger to watch is US spot Bitcoin ETF flows, which have shown steady growth recently. Continued institutional inflows could provide the sentiment support needed to stabilise prices; a sustained reversal would likely accelerate the downward momentum.
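The scenario logic above reduces to a few price thresholds. A minimal sketch, using the levels quoted in the article (`btc_scenario` is a hypothetical illustration, not a trading signal):

```python
def btc_scenario(price: float) -> str:
    """Map a BTC price (USD) to the scenarios described in the outlook.

    Thresholds from the article: US$78K support, US$82K consolidation
    ceiling, US$75K downside target if support breaks.
    """
    if price < 75_000:
        return "downside target reached"
    if price < 78_000:
        return "support broken: risk of further selling toward US$75K"
    if price <= 82_000:
        return "consolidating between US$78K and US$82K"
    return "support defended: retesting higher levels"

# At the article's last price, Bitcoin sits inside the consolidation band
print(btc_scenario(80_015.27))  # consolidating between US$78K and US$82K
```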

Also Read: Bitcoin just hit US$80K again, but this rally is built on shaky ground

Corporate earnings provided isolated bright spots amid the geopolitical gloom. Fortinet surged 20 per cent on raised guidance, and Peloton rose nine per cent after beating revenue expectations. Chipmakers like Arm Holdings suffered as the smartphone industry slowed, highlighting sector-specific vulnerabilities that compound broader macro concerns. Regional markets felt the contagion quickly, with the ASX 200 set for a sharp decline of over 1.7 per cent at the open, following the late-session reversal in US equities. European indices faced similar pressure early Friday, though corporate earnings from firms like Tenaris and Endesa provided isolated support earlier in the week.

Regulatory clarity remains a critical variable for cryptocurrency markets. The CLARITY Act represents a pivotal moment for the industry: the White House has aimed to sign it on July 4, but key negotiators, such as Senator Kirsten Gillibrand, suggest a presidential signature may not come until August 2026 due to ongoing debates over ethics and consumer-protection provisions. This timeline matters enormously for institutional participation and market structure. My hope is that the closer the bill gets to passage, the more confidence returns to digital asset markets, potentially providing a counterweight to macro headwinds.

For now, I remain hopeful.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.

Join us on WhatsApp, Instagram, Facebook, X, and LinkedIn to stay connected.

The post Bitcoin drops to US$80K while these 4 tokens surge over 100% in 7 days appeared first on e27.