It’s time to reshore: Why AI-augmented development changes the equation

2.5 days. Three brands. Three locales. Nine languages. One person — and Claude.

I built a custom e-commerce platform covering Singapore, Hong Kong, and Japan. Different business rules, offerings, and compliance requirements for each market.

The team? Me, Claude Opus 4.5, and Claude Code — multiple instances running in parallel.

What would this have taken with an offshore team?

The hidden tax

For decades, the dominant model for software development looked like this: Product managers sit near the business. They write detailed specs. Those specs get shipped to a large development team — often offshore in the Philippines, Vietnam, Indonesia, and India. Each developer gets a well-defined slice. They implement exactly what’s described. Ship it back.

The economics seemed compelling. Local engineers cost more. Offshore developers cost a fraction. Scale up the team, ship the specs, get the code back.

But much of the logic rests on a fallacy Fred Brooks identified in 1975: if it takes one woman nine months to have a baby, can nine women have a baby in one month?

Brooks’ The Mythical Man-Month taught us that adding people to a late software project makes it later. The corollary: throwing bodies at software development doesn’t scale linearly. Communication overhead grows quadratically with team size: n people means n(n-1)/2 possible channels.
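Brooks’ coordination arithmetic is easy to make concrete. A minimal Python sketch (the function name is mine, not Brooks’): with n people, there are n(n-1)/2 pairwise communication channels.

```python
def channels(n: int) -> int:
    """Number of pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

# Channels grow quadratically, not linearly, with headcount.
for n in (2, 5, 10, 50):
    print(n, channels(n))  # 2 -> 1, 5 -> 10, 10 -> 45, 50 -> 1225
```

Double the team and you roughly quadruple the coordination paths — the labour-cost savings have to outrun that curve.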

And yet the offshore model doubled down on exactly this fallacy — betting that cheap labour would compensate for coordination costs.

It doesn’t.

A systematic literature review of offshore software development identified 18 problem areas. But in one form or another, they all boil down to the bandwidth, latency, and cost of communication.

What you save in labour, you lose in agility and velocity.


What we built

Let me be transparent: nothing we — me, Claude Opus 4.5, and Claude Code — built was groundbreaking. It’s foundational platform work — the kind of thing that’s been sitting in someone’s product backlog “for way too long.”

A multi-brand, multi-locale e-commerce platform supporting Singapore (English, Mandarin, Malay, Tamil), Hong Kong (Simplified Chinese, Traditional Chinese, English), and Japan (Japanese, English). Each locale with its own business rules and compliance requirements.
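For illustration only, here is a minimal sketch of what a per-market locale registry might look like. The names, language codes, and structure are my assumptions, not the platform’s actual schema.

```python
# Hypothetical per-market registry: each market carries its own
# language list; business rules and compliance checks would hang
# off the same keys.
MARKETS = {
    "SG": {"languages": ["en", "zh-Hans", "ms", "ta"], "currency": "SGD"},
    "HK": {"languages": ["zh-Hans", "zh-Hant", "en"], "currency": "HKD"},
    "JP": {"languages": ["ja", "en"], "currency": "JPY"},
}

def supported_locales(market: str) -> list[str]:
    """Return BCP 47-style locale tags, e.g. 'ja-JP', for a market."""
    return [f"{lang}-{market}" for lang in MARKETS[market]["languages"]]
```

Keeping the market as the unit of configuration is what lets one codebase serve different business rules per locale.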

But this platform enables what comes next:

  • AI Agent for Pre-Sales Consultation on features and benefits
  • AI Agent for Pre-Appointment Consultation — gathering the information that matters and booking the face-to-face appointment before a customer meets with a representative
  • AI Agent for Customer Support and Success
  • A next-generation customer database

We laid out 14 sprints. Cleared 7 in 2.5 days. Halfway there — and tracking toward something I’m excited to show in the New Year.

What’s coming is harder. But the foundation is solid.

And that foundation? A two-pizza team without AI would have taken at least a week. Maybe longer.

The workflow

A solid workflow was what made this possible — and it started with augmentation at the meta level.

I’ve spent months developing an AI-Augmented Writing workflow. Project instructions that evolved from three sentences to over 2,000 words. A system of editorial calendar chats, handoff briefs, and sprint-style execution.

To bootstrap the development workflow, I took those writing instructions into a new Claude Project in a chat dedicated to creating and evolving project instructions for AI-augmented Development. Then I worked with Claude Opus 4.5 to identify what was different between writing and coding.

The gap was smaller than I expected. The same patterns that make AI-augmented Writing effective — sharp instructions, short time horizons, iterative refinement, clear handoffs — translate directly to development.

Call it augmentation augmenting itself. I used AI collaboration to build a better system for AI collaboration. Yes, this might be a bit too meta, but then I’m a nerd, and the results speak for themselves.

The workflow:

  • Project-instructions chat: Establish the workflow and working agreements — bootstrapped from my writing system
  • Product-definition chat: Lay out the vision, analyse existing platforms, generate a product spec
  • Mission-control chat: Break the spec into sprints, create handoffs for each sprint, coordinate the whole, and keep track of everything (including reminders for me)
  • Sprint chats: Individual chats for each sprint, feeding work to Claude Code

But here’s what made even greater velocity possible: parallel workflows and automated validation.

I used Claude Chrome — which Anthropic opened to all paid plans on December 18 — with shortcuts to automate the analysis phase. Nine websites across three brands and three markets, each captured and audited automatically. Those audits fed directly into the product definition.

I had separate chats for mundane automation: translation verification, repetitive code generation, quality gates, and testing. And I used Claude Chrome integrated with Claude Code to validate work in the browser — catching errors, verifying before the next sprint.

Original expectation: one sprint per day. Blindingly fast compared to non-AI-augmented teams.

Actual result: seven sprints in 2.5 days. Framework running locally in two hours. Up and running in the cloud with CI/CD pipeline the same day. With time and bandwidth to tackle even more.

The formation that emerged:

Jordan
(orchestrator)
    │
    ▼
Claude Opus 4.5
(planning + coordination)
    │         │
    ▼         ▼
Claude Code #1   Claude Code #2
(core features)  (content/polish)
    │         │
    ▼         ▼
Working Software

While one Claude Code instance worked, I planned the next sprint. Claude Opus 4.5 tracked time on tasks and prevented conflicts. After every sprint, we ran a quick retro to revise the project instructions.

I’ll publish a detailed breakdown of the Claude Chrome workflow next Tuesday, December 30 — the shortcuts, the automation patterns, and what I learned.


The cost

I used to be a Claude Pro subscriber at US$20/month. That was sufficient — until Opus 4.5.

To get the capacity I needed for this kind of intensive work, I upgraded. First to Max 5x at US$100/month. Then to Max 20x at US$200/month.

US$200/month for what would have been weeks of a distributed team’s time.

Have the ups and downs of Claude’s stability the last few days been frustrating? Sure. But I’ve worked through them and am still moving fast.

The fluency insight

Here’s what struck me: we’ve been applying the wrong framework. With people, not just AI.

For decades, we tried to “automate product development” by sending work wherever labour is cheaper and plentiful. Directed Contribution work — well-defined tasks in unambiguous contexts — shipped over the wall.

The result is what we see when we have AI write for us instead of writing with AI. Slop.

And more slop.

We were using the fluency framework wrong.

In my previous piece, I described three modes of contribution:

  • Directed Contribution: Under someone’s guidance, executing well-defined tasks in unambiguous contexts
  • Independent Contribution: Operating autonomously, first in well-defined situations, then in ambiguous ones
  • Working Through Others: Setting vision and direction, guiding others toward outcomes

The offshore model was built on Directed Contribution — the work AI now handles.

But software development requires augmentation and collaboration. Human-AI collaboration (and human-human collaboration). High-bandwidth communication. Real-time problem-solving. The ability to clarify the problem space as you go.

You can’t do that across communication gaps — whether those gaps are time zones, organisational silos, or oceans.

What this means for Southeast Asia

The offshore model that built much of SEA’s IT services industry is dying.

Vietnam produces 50,000 IT graduates annually. Over 45 per cent of its developer workforce operates at the junior level — trained to do Directed Contribution work. The Philippines has built a massive tech services industry on similar foundations.

The question for the region isn’t whether AI will disrupt this model. It already is.

The question is whether Southeast Asia can compete on value, not volume.

Can the region produce engineers who operate in Independent Contribution mode? Engineers who understand the What and Why, not just the How? Engineers who can be part of elite, co-located teams — whether those teams sit in Singapore, Jakarta, Ho Chi Minh City, or alongside clients in Tokyo, Sydney, or San Francisco?

The opportunity isn’t to fight the transformation. It’s to ride it.

Small teams. Co-located. High-bandwidth. Fully exploiting AI augmentation.

For this project, I was flying solo, but I built what would have taken a distributed team weeks, or a co-located two-pizza team at least a week. This is solo development — team workflows are still being invented. But it proves the thesis: a small, focused team that can understand and clarify the problem space in real-time can build and deploy at a velocity that human waves cannot match.

The equation has changed

Labour cost arbitrage no longer compensates for the collaboration tax.

The hidden costs — communication latency, coordination overhead, the telephone effect, revision cycles — matter more when AI handles the Directed Contribution work that justified the offshore model.

It’s time to reshore. Not to human waves in different locations. To small, AI-augmented teams that can think, iterate, and ship.

The question is whether you’re building the team that leads, or the team that gets left behind.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.


Image generated using AI.

The post It’s time to reshore: Why AI-augmented development changes the equation appeared first on e27.