
AI coding tools turned software output into a speed story. A developer can sketch a product in the morning and push a working build before dinner. That is why vibe coding spread so fast, even as security researchers warned that AI-generated code can widen software supply chain risk.
The part most people missed sits behind the prompt box. In many AI coding stacks, code, prompts, and usage data can pass through outside platforms, cloud infrastructure, or model-provider systems.
For a startup hacking on a landing page, that may feel tolerable. For a bank, fintech, or fund, it can open a path to IP loss, audit trouble, and valuation damage.
An alarm bell from my own workflow
The concern started from a personal place. I had been using AI coding tools on quant trading systems when I realised the privacy settings behind those tools deserved a much closer look. This is my life’s work. How am I supposed to feel about this?
One example of that concern, written into policy, is Cursor’s data-use page: if Privacy Mode is turned off, Cursor may use and store codebase data, prompts, editor actions, code snippets, and other code data to improve features and train models. Requests still pass through its backend even when a user brings their own API key.
The rules also change depending on which product is in the chain. OpenAI states it does not train on business data by default, and Anthropic says the same for its commercial products. Consumer products and third-party access follow separate terms, which leaves enterprises sorting through a patchwork of settings, vendors, and responsibilities.
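For technical readers, the distinction is easy to make concrete. The sketch below is illustrative only, assuming the OpenAI Python SDK and a hypothetical enterprise key: it shows what a direct call to a model provider looks like when no third-party tool’s backend sits in between. Whether that call is covered by no-training terms still depends on the contract behind the key, not on the code.

```python
# Illustrative sketch only: calling a model provider directly with your own
# API key, so requests do not pass through a third-party tool's backend.
# The model name and the data-retention terms that apply are assumptions
# your own agreement would need to confirm.
from openai import OpenAI

client = OpenAI(api_key="YOUR_ENTERPRISE_KEY")  # key issued under your own contract

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; use whatever model your agreement covers
    messages=[
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Refactor this function."},
    ],
)

print(response.choices[0].message.content)
```

Even in this direct setup, the provider still sees the prompt and the code inside it; the point is simply that every extra platform in the chain is another party whose settings and terms have to be checked.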
Why this hits finance harder
A code leak is not just a developer problem in regulated sectors. A financial codebase can hold client identifiers, internal controls, pricing logic, fraud rules, risk models, and trading strategies. Put differently, source code carries business logic, internal workflows, architecture decisions, and years of engineering work. Once it leaves a company’s control, the damage can spill into customer trust, due diligence, compliance, and enterprise value.
Ninety per cent of security professionals say insider attacks are as hard as, or harder than, external ones to detect, and 72 per cent of organisations still cannot see how users interact with sensitive data across endpoints, cloud apps, and GenAI platforms.
And that pressure is meeting a tougher legal climate. The year 2025 marked the move from AI hype to AI accountability, with regulators in the U.S. and EU shifting toward enforcement and compliance deadlines. In Europe, the Digital Operational Resilience Act (DORA) makes clear that financial entities remain fully responsible for their obligations, including when ICT services are outsourced.
Visibility is also getting worse as AI systems touch more of the workflow. Only 21 per cent of organisations maintain a fully up-to-date inventory of agents, tools, and connections, leaving 79 per cent operating with blind spots. Nearly 40 per cent of enterprise AI interactions now involve sensitive data, including copied text, pasted content, and file uploads.
What’s the pitch to non-technical executives?
Let’s frame the risk in business terms. Using AI means sending data to whoever provides the model or the platform, and potentially also to whoever provides the infrastructure.
The big question for executives, in my view, is whether they are comfortable with that chain seeing, storing, or learning from their most valuable data.
In finance, that answer is shifting quickly. Financial and regulated firms cannot afford the ‘move fast and break things’ approach that many AI tools implicitly encourage. More often now, regulators, buyers, and internal security teams want a clear record of where data went, who touched it, and what evidence exists afterwards.
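As one illustration of what that record could look like, here is a minimal sketch in Python; the field names, storage, and retention choices are hypothetical, and a real deployment would need tamper-evident, access-controlled storage rather than an in-memory list.

```python
# Illustrative sketch only: a minimal audit trail around AI requests.
# Field names and storage are hypothetical assumptions for this example.
import hashlib
import json
import time

AUDIT_LOG = []  # stand-in for an append-only, access-controlled store


def record_ai_request(user: str, provider: str, payload: str) -> dict:
    """Log who sent what, where it went, and when, without storing the raw code."""
    entry = {
        "timestamp": time.time(),
        "user": user,
        "provider": provider,
        # Hash the payload so reviewers can verify what was sent without
        # the log itself becoming another copy of the source code.
        "payload_sha256": hashlib.sha256(payload.encode("utf-8")).hexdigest(),
    }
    AUDIT_LOG.append(entry)
    return entry


# Example: record a prompt before it is sent to a model provider.
record_ai_request(
    user="dev-042",
    provider="model-vendor-api",
    payload="def price_swap(notional, rate): ...",
)
print(json.dumps(AUDIT_LOG[-1], indent=2))
```

The detail matters less than the principle: evidence of where data went is something a platform either produces by design or cannot produce at all.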
The next premium in AI: Controlled execution?
The market has already rewarded speed. The next premium may go to platforms that keep that speed while holding up under security review and giving compliance teams evidence they can stand behind. That is a finance story as much as a tech one, because budgets, contracts, due diligence, and enterprise value tend to follow tools that reduce uncertainty instead of adding another black box.
AI can clearly write code. But where does that code travel? Who can inspect the path? What proof is left behind when the work is done? Those are the sharper questions for boards, CFOs, CISOs, and investors.
—
Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.
