
The hidden cost of AI coding: Why proof will matter more than prompts

AI coding tools turned software output into a speed story. A developer can sketch a product in the morning and push a working build before dinner. That is why vibe coding spread so fast, even as security researchers warned that AI-generated code can widen software supply chain risk.

The part most people missed sits behind the prompt box. In many AI coding stacks, code, prompts, and usage data can pass through outside platforms, cloud infrastructure, or model-provider systems. 

For a startup hacking on a landing page, that may feel tolerable. For a bank, fintech, or fund, it can open a path to IP loss, audit trouble, and valuation damage.

An alarm bell from my own workflow

The concern started from a personal place. I had been using AI coding tools on quant trading systems, then realised the privacy settings behind those tools deserved a much closer look. This is my life’s work. How am I supposed to feel about this?

One example of that concern reflected in policy is Cursor’s data-use page. It states that if Privacy Mode is turned off, Cursor may use and store codebase data, prompts, editor actions, code snippets, and other code data to improve features and train models. Requests still pass through Cursor’s backend, even when a user brings their own API key.

The rules also change depending on which product is in the chain. OpenAI states it doesn’t train on business data by default, and Anthropic says the same of its commercial products. Consumer products and third-party access follow separate terms, which leaves enterprises sorting through a patchwork of settings, vendors, and responsibilities.

Also Read: Can you build an app without coding? My experiment might surprise you

Why this hits finance harder

A code leak is not just a developer problem in regulated sectors. A financial codebase can hold client identifiers, internal controls, pricing logic, fraud rules, risk models, and trading strategies. Put differently, source code carries business logic, internal workflows, architecture decisions, and years of engineering work. Once it leaves a company’s control, the damage can spill into customer trust, due diligence, compliance, and enterprise value.

Ninety per cent of security professionals say insider attacks are as hard as, or harder than, external ones to detect; 72 per cent of organisations still cannot see how users interact with sensitive data across endpoints, cloud apps, and GenAI platforms.

And that pressure is meeting a tougher legal climate. 2025 marked the move from AI hype to AI accountability, with regulators in the U.S. and EU shifting toward enforcement and compliance deadlines. In Europe, the Digital Operational Resilience Act makes clear that financial entities remain fully responsible for their obligations, including when ICT services are outsourced.

Visibility is also getting worse as AI systems touch more of the workflow. Only 21 per cent of organisations maintain a fully up-to-date inventory of agents, tools, and connections, leaving 79 per cent operating with blind spots. Nearly 40 per cent of enterprise AI interactions now involve sensitive data, including copied text, pasted content, and file uploads.

What’s the pitch to non-technical executives?

Let’s frame the risk in business terms. Using AI means sending data to whoever provides the model or the platform, and potentially also to whoever provides the infrastructure.

The big question for executives, in my view, is whether they are comfortable with that chain seeing, storing, or learning from their most valuable data.

Also Read: From chatbots to vibe-coding: 3 AI experiments that changed my investment strategy

That answer is changing fast. Financial and regulated firms can’t afford the ‘move fast and break things’ approach that many AI tools implicitly encourage. More often now, regulators, buyers, and internal security teams want a clear record of where data went, who touched it, and what evidence exists afterwards.

The next premium in AI: Controlled execution?

The market has already rewarded speed. The next premium may go to platforms that keep that speed while standing up to security review, and that give compliance teams evidence they can stand behind. That is a finance story as much as a tech one, because budgets, contracts, due diligence, and enterprise value tend to follow tools that reduce uncertainty instead of adding another black box.

AI can clearly write code. But where does that code travel? Who can inspect the path? What proof is left behind when the work is done? Those are the sharper questions for boards, CFOs, CISOs, and investors.



Cybersecurity strategies for startups on a budget

Digital evolution worldwide has been rapid over the past few decades. Startups are increasingly transitioning from local to regional and even global market presence, underscoring the opportunities that digitisation at scale has brought. This development has made cybersecurity a key pillar of effective business governance in the modern age. 

Today, having a robust cybersecurity ecosystem ensures that startups preserve stakeholder trust. Thankfully, small businesses no longer require a large capital investment to build a defensible and modern security posture. With a strategic approach, high impact and low cost can exist simultaneously.

Shifting from a reactive mindset to a proactive one

The cybersecurity industry has largely shifted from focusing solely on prevention toward building resilient and proactive models. A business’s capacity for detection and recovery has become an important measure of its longevity. Adopting this philosophy is critical for emerging businesses, which are often viewed as particularly vulnerable.

For startups, effectively protecting sensitive customer information can be the difference between long-term growth and reputational damage. Entrepreneurs who focus on security early on find it easier to navigate regulatory requirements and secure partnerships with large organisations. Fortunately, building resilient systems is more about continuous education and operational improvement than it is about heavy capital expenditure.

Strengthening identity and access control

Identity is the primary focus area for building modern cybersecurity systems. With more businesses migrating to cloud-based ecosystems, user account management is now the most important line of defence. Implementing Multi-Factor Authentication (MFA) is the most effective and low-cost approach available. By adopting a second form of verification, organisations can prevent approximately 99 per cent of account takeover attacks.

Centralised password management is also essential. Making employees remember complex passwords leaves room for reuse across personal and professional platforms. Tools such as Bitwarden or Keeper help startups generate unique, complex passwords and avoid reuse. This ensures that a breach on a third-party platform does not open a path deeper into internal systems. Such subscriptions are inexpensive for the depth of protection they provide.

Also Read: AI vs AI: Inside Southeast Asia’s new cybersecurity war

Managing the hybrid work perimeter

Flexible work arrangements, such as hybrid or fully remote models, are becoming a defining feature across industries worldwide. Many startups leverage remote talent to stay competitive, but this decentralised model introduces risks as employees access data from unsecured home networks. In fact, 25 per cent of employees working from home are unaware of their device’s security protocols. Startups must rethink data protection outside the office.

Organisations should implement Virtual Private Networks (VPNs) and cloud-based security layers to protect data outside the office. As cyber resilience becomes a higher priority in remote work environments, defining clear remote work policies and educating employees about the risks of unsecured public Wi-Fi are critical and, fortunately, low-cost.

Continuous digital hygiene and automated patching

In 2026, the speed of digital attacks has continued to increase, often aided by automated tools that scan for known vulnerabilities. Keeping all software and applications up to date is a non-negotiable task. Many regional incidents occur because a business delayed a critical update to avoid a minor disruption, only to leave a vulnerability exposed to opportunistic attackers.

Also Read: How cybersecurity companies can build trust through digital PR

Automated patch management is a cost-effective way to mitigate disruptions caused by outdated software. Most modern platforms offer auto-update features that require minimal configuration. For startups managing cloud infrastructure, using managed services that handle security updates can offload significant technical risk. Maintaining a high standard of digital hygiene ensures the company is not “low-hanging fruit” for the scripts and ransomware variants currently affecting small and medium-sized enterprises.

Leveraging frameworks and local compliance

Founders do not need to build security policies from scratch. Numerous free frameworks provide a roadmap for improving security. The NIST Cybersecurity Framework is a globally respected standard, but regional alternatives provide specific guidance. For example, business owners and IT teams in Singapore should seek the government-created Cyber Essentials Mark to align with region-specific standards. 

Also Read: Code, power, and chaos: The geopolitics of cybersecurity

Adhering to these frameworks also helps with data sovereignty. As countries across Southeast Asia strengthen data governance and protection practices, businesses in the region must demonstrate a baseline of security to remain compliant and avoid fines. Compliance is also a competitive advantage — it signals to enterprise clients and investors that the startup is a mature, responsible partner.

The 3-2-1 backup and recovery strategy

No security system is impenetrable, making a robust backup strategy the ultimate safety net. The “3-2-1” rule remains the industry standard — at least three copies of data on two different media, with one copy kept off-site. This ensures that even during a ransomware attack or hardware failure, the business can be restored without paying a ransom.
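To make the rule concrete, here is a minimal sketch that checks a backup inventory against 3-2-1. The inventory format is hypothetical, purely for illustration:

```python
# Check a list of backup copies against the 3-2-1 rule:
# at least 3 copies, on at least 2 media types, with at least 1 off-site.
backups = [
    {"location": "on-site",  "medium": "disk"},
    {"location": "on-site",  "medium": "tape"},
    {"location": "off-site", "medium": "cloud"},
]

def satisfies_3_2_1(copies):
    media = {c["medium"] for c in copies}
    has_offsite = any(c["location"] == "off-site" for c in copies)
    return len(copies) >= 3 and len(media) >= 2 and has_offsite

print(satisfies_3_2_1(backups))  # True
```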

Regularly testing the recovery process is as important as the backup itself. Many organisations realise too late that their backups were corrupted or that recovery is too slow. Performing a “fire drill” once or twice a year ensures the team knows how to get the business back online within hours. Preparedness is often the difference between a minor incident and a terminal business failure.

Fostering a culture of security and resilience

Ultimately, technical tools are only as effective as the people using them. Building a culture where every team member feels responsible for security is the most cost-effective long-term strategy. By educating employees on the key strategies and frameworks of modern cybersecurity, startups can achieve company-wide safety without spending a fortune.



How SaaS companies are valued: Why the multiple is only the surface story

One of the most persistent myths in tech is that SaaS valuation is a simple formula. Take Annual Recurring Revenue (“ARR”), apply a market multiple, and you have your answer.

It is a useful shortcut. It is also how founders end up misunderstanding what their company is actually worth.

Yes, SaaS businesses are often discussed in terms of ARR multiples. But in real transactions, especially exits, the multiple is not the valuation logic. It is the output of it. What buyers are really valuing is the quality of the revenue, the durability of growth, the efficiency of the model, and the type of transaction being done.

That distinction matters because two SaaS companies with the same ARR can produce very different outcomes in the market.

The first point is straightforward: recurring revenue matters more than revenue in general. For most SaaS businesses, valuation is anchored on ARR, not total revenue. That is because recurring subscription revenue is the part that a buyer can actually underwrite with some confidence. It is predictable, repeatable, and, if the business is healthy, compounding.

By contrast, implementation fees, consulting income, or one-off project work may still be commercially useful, but they rarely deserve the same multiple. A company with US$10 million in total revenue, of which US$8 million is recurring, should not expect to be valued the same way as a company with US$10 million in which half the revenue comes from non-recurring services. The first looks like a software asset. The second may still be a good business, but it is not as clean a recurring one.
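To make that arithmetic concrete, here is a minimal sketch of how a buyer might value the two streams separately. The multiples are hypothetical placeholders, not market benchmarks:

```python
def blended_valuation(recurring, non_recurring,
                      arr_multiple=6.0, services_multiple=1.0):
    # Recurring revenue earns a software multiple; services revenue does not.
    # Both multiples here are illustrative assumptions, not market data.
    return recurring * arr_multiple + non_recurring * services_multiple

# Two companies, each with US$10M in total revenue:
print(blended_valuation(8_000_000, 2_000_000))  # 50000000.0 (US$50M)
print(blended_valuation(5_000_000, 5_000_000))  # 35000000.0 (US$35M)
```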

But even that is only the starting point.

What really separates SaaS businesses in valuation is not just the amount of ARR, but the quality of that ARR. And the clearest signal of quality is retention.

This is where many founders become overly optimistic. They see recurring billing and assume the market will view their revenue as durable. Buyers do not think that way. They look at churn first. If customers are leaving too quickly, the business is not truly compounding. It is just running hard to replace what is already falling out of the bottom.

Also Read: The autonomous agent paradigm: Meta’s Manus acquisition, MCP integration, and the disruption of SaaS

As a practical benchmark, SMB SaaS volume churn should generally not be more than three per cent per month. Enterprise SaaS should be far tighter, ideally with near-zero volume churn across core accounts. The exact number is not the whole point. The principle is: Retention is a proxy for stickiness, product relevance, and how deeply the software is embedded in customer workflows.
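The compounding effect is easy to underestimate. A quick back-of-envelope check of the three per cent benchmark above:

```python
# Monthly volume churn compounds: at 3% per month, roughly 30% of
# customers are gone within a year, before any new sales are counted.
monthly_churn = 0.03
annual_retention = (1 - monthly_churn) ** 12
print(f"{annual_retention:.1%}")  # 69.4%
```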

In plain English, buyers pay more for revenue that stays.

That also means an average-growth company with excellent retention can be worth more than a faster-growing business with weak customer durability. Founders often overemphasise growth and underappreciate the penalty the market places on churn. But a leaky SaaS business is not a premium SaaS business, no matter how strong the top-line story sounds in a deck.

Growth still matters, of course. A company growing more than 300 per cent year-on-year will usually attract more attention than one growing at 50 per cent. Faster growth often supports a higher multiple because it suggests a bigger future revenue base and a stronger competitive position.

But growth is not one thing. Buyers care about growth quality.

Was growth driven by healthy demand and repeatable customer acquisition, or by unsustainably high sales and marketing spend? Was it supported by strong expansion within existing accounts, or did it depend on heavy discounting just to win new logos? Is the growth durable, or did the company simply pull revenue forward?

These are not academic questions. They directly shape valuation. High growth with poor retention and weak economics is less impressive than founders like to think. High growth with strong retention and efficient acquisition is where the real premium sits.

This leads to another factor that founders consistently underestimate: margins and unit economics.

Software is attractive because it should scale. That does not mean every SaaS company automatically deserves a strong valuation. Buyers will still look closely at gross margins, customer acquisition cost, payback periods, and overall operating leverage. If the business needs too much spending to maintain growth, or if margins remain thin despite scale, the valuation logic weakens. A recurring revenue business with poor unit economics is not a great asset just because it invoices monthly.

Also Read: The agent swarm is unleashed on SaaS

So when people ask how SaaS companies are valued, the better answer is this: not by ARR alone, but by the quality of the machine producing that ARR.

That machine is judged across four big dimensions.

  • First, how much revenue is truly recurring.
  • Second, how sticky that revenue is.
  • Third, how durable and efficient the growth is.
  • Fourth, whether the economics prove the business can scale.

Only after that does the multiple make sense.

Where this becomes more interesting is when founders confuse fundraising valuation with exit valuation. The two are related, but they are not the same exercise.

In a VC fundraising round, the valuation often reflects future potential more than present-day operating quality. Investors may be willing to pay up because they believe the company could become a category winner, dominate a large market, or grow into a strategically important platform. The valuation is often shaped by what the company might become.

In an exit, especially in M&A, the lens is more grounded. Buyers are usually paying for what exists today, adjusted for what they believe they can realistically achieve after closing. That makes M&A valuation more closely linked to current performance, risk, and transaction logic.

Also Read: The rise of one-person AI companies and why micro-SaaS is at the centre of it

Put differently, fundraising tends to reward possibility. Exits tend to reward evidence.

This is why founders should be careful when using private fundraising rounds as reference points for what their company should be worth in a sale process. A VC may tolerate messy retention, thin margins, or heavy burn if the upside is large enough. An acquirer, particularly one writing a real cheque to buy control, will usually be more disciplined.

Even inside M&A, not all buyers think alike.

A strategic acquirer may pay more because your product fills a capability gap, gives them access to a highly relevant customer base, or creates synergies across product, distribution, or go-to-market. They are not only buying your standalone cash flow. They may also be buying what your company unlocks inside their broader machine.

A financial buyer, by contrast, is usually more disciplined on headline multiple. They will focus more tightly on retention, margins, cash flow profile, and whether the growth engine is efficient enough to support an investment case. That does not mean they always pay less. It means their logic is usually more rooted in the business as an asset, rather than in strategic overlap or synergy.

So the same SaaS company can produce very different valuations depending on whether the buyer is strategic or financial.

And then there is deal structure, which founders often ignore until it is too late.

A headline valuation is not the same as bankable value. If a buyer offers a rich number, but much of the consideration comes in shares rather than cash, the economics become much less certain. A share swap may look attractive on paper, especially if the acquirer is growing quickly or trades well publicly. But it also means the seller is taking future performance risk, liquidity risk, and market risk on the buyer.

An all-cash offer at a slightly lower headline valuation may, in practice, be worth more because the proceeds are real, immediate, and certain. The same logic applies to earn-outs, deferred payments, and other structured consideration. Founders should not just ask what the price is. They should ask what form the price takes, when it is paid, and what has to happen before it becomes real.

This is why transaction context matters so much. Market benchmarks can tell you where comparable businesses may sit. But actual outcomes depend on buyer fit, competitive tension, and structure. A strong strategic fit with multiple interested buyers can move valuation above generic benchmarks. A single-bid process with messy diligence and weak retention can drag it below them very quickly.

Also Read: I built an AI agent for myself — it became a 2,000-user micro-SaaS

The uncomfortable truth is that SaaS valuation is less about memorising what multiple the market is paying and more about understanding why one business deserves that multiple while another does not.

Founders who want to improve valuation should stop asking only, “What are SaaS companies trading at?” and start asking better questions.

  • How much of my revenue is truly recurring?
  • How strong is retention by segment and cohort?
  • Is our growth efficient, or just expensive?
  • Do our margins support the software story?
  • Would a buyer see this as a durable asset, or as a promising but risky one?
  • And if I do get an offer, how much of it is actually cash?

That is the real lens.

The market may speak in multiples. But deals are done on quality, confidence, and structure. Founders who understand that early will prepare differently and, usually, negotiate better.



Air-gapped open source and the secure but stale paradox

There is a familiar comfort in industrial environments that still keep critical systems isolated from the outside world. The argument sounds sensible. If the plant is air-gapped, exposure is lower. If exposure is lower, updates can wait. If updates can wait, stability wins. That logic has carried many sites for years, but it is becoming harder to defend as open source components sit deeper inside historians, engineering workstations, remote access stacks, vendor appliances, monitoring tools, and the layers around control.

In operational technology, outages are often unacceptable and must be planned days or weeks in advance, software changes must be thoroughly tested, and deployed technology often remains in service for 10 to 15 years or longer. OT also frequently relies on older operating systems that may no longer be supported. That is the environment in which the paradox appears. The safest plant is not always the one that updates most often, but it is also not the one that quietly ages into unmanageable software risk.

That is why the phrase secure but stale matters. In plants, stale software is rarely the result of negligence alone. It is often the result of rational operational discipline. The trouble is that rational local decisions can create strategic drift. A component that was acceptable when commissioned can become difficult to patch, harder to support, and poorly understood by the people still operating it years later. This is not a niche problem. It is part of the structural difference between IT and OT.

The wrong objective is patch speed

Many security discussions still assume that the right answer is to push plants closer to enterprise patching cycles. That is usually the wrong lesson. In industrial settings, speed without operability becomes its own risk. Software updates in OT cannot always be implemented on a timely basis, need vendor and end user testing, and may require revalidation with control engineers, security teams, and IT working together. If leaders ignore that and set patch velocity as the headline metric, they will force either unsafe change or quiet non-compliance. Neither outcome is mature.

A better objective is controlled freshness. By that I mean something more realistic than always current and more responsible than indefinitely deferred. Controlled freshness means every open source component has a known origin, a known owner, a known operational purpose, and a known path to replacement or containment. That is a more serious standard for plants because it respects the reality of shutdown windows while refusing blind trust as a long-term operating model.

Also Read: How to navigate the investment opportunity in climate tech sector

Much software supply chain guidance points in exactly this direction. It treats SBOM, vendor risk assessment, open source controls, and vulnerability management as complementary capabilities, not substitutes, and it stresses that open source provenance, integrity, support, and maintenance are often not well understood or easy to discover.

Open source is not the problem; unmanaged open source is

There is no value in pretending plants can avoid open source. They already depend on it, often indirectly. The real issue is that many sites do not know precisely where it sits, which versions are deployed, or whether a vendor appliance that looks closed is in fact carrying a stack of ageing open components underneath.

The same guidance holds that organisations should understand suppliers’ use of open source components, acquire those components through secure channels from trustworthy repositories, maintain sanctioned internal repositories, and use hardened internal repositories or sandboxes before introducing components into development environments. It also says that when no vendor-supplied SBOM exists, organisations should perform binary decomposition to generate SBOMs for legacy software where technically and legally feasible.

That changes the leadership question. The issue is no longer whether a plant uses open source. The issue is whether the organisation has operationally useful visibility into that open source estate. In practice, that means knowing which components matter enough to affect production, safety, recovery, vendor support, or incident response. Perfect visibility can wait. Actionable visibility cannot.

The real control layer is the offline intake model

Air-gapped environments need a better software intake discipline than most enterprises because they cannot rely on frequent corrections later. The strongest plants do not treat updates as downloads. They treat them as engineered releases.

The Secure Software Development Framework is helpful here because it is not written only for fast-moving cloud products. It recommends release integrity verification, including cryptographic hashes and code signing, and it says organisations should securely archive each software release together with integrity verification information and provenance data.

It also calls for provenance data to be maintained and updated whenever software components change, and for policies to cover the full life cycle, including notifying users of the impending end of support and end of life. It further recommends maintaining older versions until transitions from those versions have been completed successfully. In a plant context, that is not administrative overhead. It is the basis for being able to trust an offline release years after it was first imported.
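To illustrate just the hash-verification step, here is a minimal sketch of checking an artefact against a published digest before it enters the isolated estate. The file name and digest are hypothetical; in practice the expected digest would arrive out of band, for example in signed vendor release notes:

```python
import hashlib

def sha256_of(path):
    # Compute the SHA-256 digest of a file, reading in chunks.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_release(artefact_path, expected_digest):
    # Refuse the import if the artefact does not match the published digest.
    if sha256_of(artefact_path) != expected_digest:
        raise ValueError(f"integrity check failed for {artefact_path}")
    return True

# Hypothetical usage at the offline intake point:
# verify_release("vendor-update-1.2.3.tar.gz", expected_digest="...")
```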

Also Read: What big tech won’t show you about the future of AI

This is where many industrial organisations still fall short. They have change control for plant operations, but no proper intake pipeline for software artefacts entering the isolated estate. That gap matters. If a site cannot verify what was imported, what dependencies came with it, what integrity checks were performed, and which baseline it replaced, then the air gap is only reducing exposure. It is not creating a trustworthy software discipline.

Compensating controls matter more in OT than most security teams admit

There will always be software that cannot be updated on the timetable security teams would prefer. The mature response is not denial. It is containment.

In OT, guidance recommends security controls such as antivirus and file integrity checking, where technically feasible, to prevent, deter, detect, and mitigate malware. It also advises testing patches on a sandbox system before production deployment, and notes that bump-in-the-wire devices can be installed inline with devices that cannot be updated or are running obsolete operating systems. This is important because it reframes the conversation. When patching is slow, the answer is not to pretend the exposure does not exist. The answer is to tighten the surrounding trust boundary, preserve evidence, and buy time safely.

Vendor discipline is part of plant discipline

A second mistake organisations make is to assume that air-gapped constraints excuse weak supplier behaviour. They do not. In fact, they make supplier quality more important. Users should be able to understand which vulnerabilities a patch closes, the status and applicability of patches should be documented, asset owners should keep a documented list of available and applicable patches, and hardening should be retained after patching.

SBOM repositories should be digitally signed and accessible, and open source controls should include secure acquisition channels and component visibility. Buyers should prioritise configuration management, logging, data protection, secure-by-default design, vulnerability handling, and upgrade tooling when selecting OT products.

That combination points to a harder commercial stance. If a supplier cannot explain what open source is inside the product, how upgrades are packaged, how long components are supported, and how integrity is verified offline, then the product is not merely harder to manage. It is strategically expensive to own.



If you have the will, you’ll have the skill

Most teams are still using AI the same way: ask ChatGPT, get an answer, share it around, repeat. It’s a loop.

I did the same for over a year, but AI has become a lot more than that.

Early this year, something changed how I work, and how ourteam (an AI recruiting platform focused on automating candidate screening and evaluation) works, entirely.

I started my career 16 years ago in management consulting. My first task was creating PowerPoint slides. I remember drawing a line that was always bent. Then, my senior taught me a keyboard shortcut (holding the Shift button when dragging the mouse). That became my first ‘skill’ at work.

Today, that word rings very differently.

A ‘skill’ in AI is a text file (commonly a markdown file). It contains a set of plain-text instructions that tell the AI what to do consistently every time. It can be built in minutes, yet it can capture and repeat what takes months or years to learn.
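For illustration, a hypothetical skill file might look something like this (the task and the instructions are invented):

```
# Skill: weekly-metrics-summary

When asked for the weekly metrics summary:
1. Read the latest export in the reports folder.
2. Compare each metric against last week's values.
3. Flag any change larger than 10 per cent.
4. Write the summary in plain English, three bullet points at most.
```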

I grew up with the rise of the internet, mobile and cloud. I believe those were critical shifts to get us to where we are today. Yet, this feels different.

AI isn’t just a thinking partner anymore; it’s becoming the ‘system’ teams actually run on.

I have zero technical background and I can’t code, but lately I’ve been managing a team of AI agents that handle our daily work and build our software.

These days, ourteam and I operate with AI agents daily. These agents now catch errors, review work, and ship updates on their own. The team just directs. If you told me this a year ago, I wouldn’t have believed you.

(An ‘AI agent’ is an AI-powered entity that can take actions on its own. It reads files, writes code, sends emails, and runs tests. You give it a goal, and it figures out the steps.)

Also Read: The rise of AI agents in healthcare: Designing man-machine systems

Which brings me to the point of intelligence.

In knowledge work, we tend to associate intelligence with execution: knowing how to draw that straight line on PowerPoint, making charts and models on Excel, designing prototypes, programming software, and more.

Those were skills we took months and years to learn. Today, you can create a skill in 5 minutes.

Most of the intelligence work today can be done faster, cheaper and often better by AI systems. If your use of AI is still just prompting ChatGPT back and forth, there’s actually a lot more out there.

AI models and applications have become so good that real, serious work can be done reliably and consistently. You give it an outcome, and it handles the steps.

That’s the shift most people are underestimating.

Now, the hardest part isn’t handing over execution to AI. It’s everything that comes before and after.

Judgement. Taste. Standards.

Knowing what’s good or bad. What feels right. What should be shipped and what shouldn’t. Those decisions are still on us.

Spending time to think and write, I believe, is one of the most underrated practices left.

I started as a tech enthusiast. Today, I’m a heavy Claude user (Claude-pilled as they say).

From simple chat to using it as a coworker. Now, I’m deep into Claude Code, building and shipping things through what people call “vibe coding”. (And yes, I cancelled my ChatGPT subscription, but that’s a separate story.)

The strange part is this: the more you learn, the more you work.

AI expands what’s possible, so you end up doing more. Anyone actively building with it will tell you the same.

Also Read: AI agents didn’t change how I write, they changed when I could start publishing

On X, there’s a fast-moving community debating AI models, workflows, and sharing best practices: Claude Code vs Codex, agent workflows, open-source tools like OpenClaw.

We used to wait excitedly for Apple Keynote once a year. Now, exciting breakthroughs happen every few days.

Inside ourteam, we’re constantly learning and applying new concepts: agent teams, auto-research, self-healing systems, internal LLM wikis, second brains. These aren’t just “new features”; they add up to a fundamentally different way of working.

We used to identify ourselves singularly: an engineer, a salesperson, a product manager, a customer service manager. What if today we can be all that and more? It is no longer an ‘if’, but ‘when’.

If you have the will, you’ll have the skill.

I was taught to unlearn and relearn. And here I am. It feels weird at times, but it’s super exciting.

Hopefully, this inspires you too.
