
I’ve spent 15+ years building across multiple tech ventures and cultures—starting in Vietnam, sharpening my craft in Japan and Singapore, then expanding to the US, Australia, and Europe. Each stop taught me how different ecosystems turn constraints into capability: how to ship product under pressure, build companies from zero, grow talent pipelines, and lead teams through the hardest execution challenges. Along the way, I co-founded ventures across domains—from cloud content security and AI-driven fraud detection in finance to AI-powered talent vetting, graphic design, and marketing.
That journey left me with a simple conviction: AI is fundamentally changing how we build software, how we build companies, and how we build the skills to operate at a new level of business innovation. The shift is so deep that founders and SME owners must rethink how they imagine products, platforms, and transformation—or risk shipping the right features on the wrong foundations. This is why I’m sharing what I’ve learned about building AI-first products and AI-first companies now.
How we built software: then, now, and next
Before 2000: The PC/OS era — “software in a box.”
- What it looked like: You bought a CD, installed a program on your Windows or Linux computer, and used it on that one machine.
- Where the work happened (“runtime”): On your personal computer.
- How updates worked: Rare and manual—new CD, new installer.
- Everyday example: Installing Microsoft Office from a disc.
- What this meant for builders: Ship a product once, hope it works on many different PCs, and plan big, infrequent upgrades.
2000s–2020s: The Cloud/SaaS era — “software in the browser.”
- What it looked like: You visited a website, logged in, and your app just worked—anywhere, on any device.
- Where the work happened: In big, remote data centres (“the cloud”).
- How updates worked: Continuous and invisible—features improved without you doing anything.
- Everyday examples: Gmail, Salesforce, Shopify.
- What this meant for builders: Design for millions of users, run on elastic servers, charge subscriptions, and ship improvements weekly.
Now: The AI-first era — “the model is the new runtime.”
- What it looks like: You tell the system what you want in natural language (“Close last month’s books and flag anything unusual”), and it figures out the steps—pulling data, calling tools, checking rules—then asks for help only when needed.
- Where the work happens: In an AI model that plans and coordinates actions across your systems. Think of the model as the place where decisions get made before tools are used.
- How updates work: Not just new features—better reasoning, safer behaviour, and lower cost per task as models, prompts, and policies improve.
- Everyday examples:
- A support “assistant” that reads past tickets + policy, drafts the best reply, and only escalates tricky cases.
- A finance “copilot” that reconciles invoices, highlights anomalies, and prepares a month-end summary with sources.
- A logistics “agent” that spots late shipments, calculates SLA risk, drafts messages to customers, and logs everything.
- What this means for builders: Interfaces become language, services act like agents (software teammates) with tools and memory, and operations becomes LLMOps—you manage AI quality and safety the way you manage uptime and security.
What actually changes under the hood
- From clicks to conversation: Yesterday, we clicked buttons. Today, we describe goals in plain language. Software translates those goals into steps.
- From apps to agents: Yesterday, apps waited for you to click. Today, agents can plan tasks, call your CRM/ERP/payment systems, and report back with an audit trail.
- From “it works” to “it works, is safe, and proves it”: We add guardrails (safety checks), evals (quality tests), and rollbacks (easy undo) so the AI stays helpful, polite, and compliant.
- From bigger servers to smarter placement: Some AI runs in the cloud; some runs on the device/at the edge for privacy and instant response (stores, warehouses, field teams).
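The "works, is safe, and proves it" point can be made concrete with an eval gate: before a new prompt or model version goes live, it must clear a regression suite, or you roll back. The two `answer_*` functions below are illustrative stand-ins for real model calls.

```python
# Sketch of an eval gate: run a candidate version against a small regression
# suite and keep the current version if quality dips. The answer functions
# are hypothetical stand-ins for calls to two model/prompt versions.

def answer_v1(q):
    return {"refund policy": "30 days", "shipping time": "3-5 days"}.get(q, "unknown")

def answer_v2(q):  # candidate version with a regression
    return {"refund policy": "30 days"}.get(q, "unknown")

EVALS = [("refund policy", "30 days"), ("shipping time", "3-5 days")]

def pass_rate(answer_fn):
    hits = sum(1 for q, expected in EVALS if answer_fn(q) == expected)
    return hits / len(EVALS)

def promote(candidate, current, threshold=1.0):
    """Keep the current version unless the candidate clears the eval bar."""
    return candidate if pass_rate(candidate) >= threshold else current

live = promote(answer_v2, answer_v1)
print(live is answer_v1)  # True: the candidate failed an eval, so we kept v1
```

Real eval suites are larger and fuzzier than exact string matches, but the discipline is the same: no promotion without proof, and rollback is a one-line decision.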
A quick cheat sheet
- Model (LLM): The AI brain that understands your request and decides the next steps.
- Runtime: Where the real work happens. It used to be your PC, then the cloud; now, the model’s planning/execution is part of that “where.”
- Agent: Software that can act—plan steps, call tools, remember context, and ask for help when unsure.
- Tools: Your existing systems are exposed as safe actions (e.g., “CreateInvoice,” “GetShipmentETA,” “CheckKYC”).
- Memory: Short-term and long-term context, so the agent doesn’t forget what just happened or what’s true for your business.
- RAG (retrieval) → Agentic RAG: Letting the AI “look up” your documents/policies so answers come with sources, not guesses.
- LLMOps: The day-to-day discipline of running AI in production—tests, monitoring, safety checks, and quick rollback when quality dips.
- SLA (service level agreement): Your quality promises, now expanded beyond “uptime” to include “accuracy,” “latency,” and “cost per task.”
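To make the RAG entry in the cheat sheet tangible, here is a toy retrieval sketch where every answer carries its source. The document store and the keyword-overlap scoring are deliberately naive; real systems use embeddings and a vector index, but the contract is the same: answer plus source, or an honest "I don't know."

```python
# Toy retrieval (RAG) sketch: answers come with sources, not guesses.
# DOCS and the scoring method are illustrative assumptions.

DOCS = {
    "refund-policy.md": "Refunds are accepted within 30 days of purchase.",
    "shipping-faq.md": "Standard shipping takes 3 to 5 business days.",
}

def retrieve(query):
    """Rank docs by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = [(len(words & set(text.lower().split())), name, text)
              for name, text in DOCS.items()]
    score, name, text = max(scored)
    return (name, text) if score > 0 else (None, None)

def answer_with_source(query):
    source, passage = retrieve(query)
    if source is None:
        return {"answer": "I don't know.", "source": None}
    return {"answer": passage, "source": source}

result = answer_with_source("within how many days are refunds accepted")
print(result["source"])  # refund-policy.md
```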
Takeaway for founders and SMEs
Moving from OS → Cloud → Model-as-Runtime isn’t another feature cycle—it’s a mindset change. If you keep thinking in old categories (screens, clicks, tickets), you’ll bolt AI on top of yesterday’s product. If you think in goals, agents, tools, guardrails, and proof, you’ll design AI-first products and AI-first companies that actually move the P&L.
That’s the shift—and why it matters now.
Why this moment belongs to Asia’s founders and SMEs
Southeast Asia used to pay a “complexity tax”: many languages, uneven infra, fast-shifting rules. Agentic AI flips that from handicap to advantage. If you already know the domain—freight, clinics, F&B, construction, retail finance—you can translate that know-how into AI-first products and operations faster (and cheaper) than at any time in the last 20 years.
Large enterprises are retooling too, but they move with more friction; that’s your window. (Even management consultancies are telling their clients that agentic AI requires a reset of the transformation approach.) You’re closer to an AI-first business than you think. Agentic AI lets you describe outcomes in plain language, wire those outcomes to your existing tools, and keep humans only where judgment truly matters.
What shifts in your favour
Go global from day one
- Language-first products: Ship onboarding, support, and docs in Vietnamese, Bahasa Indonesia, Thai, Tagalog, and English on the same release. Build digital sales agents that support clients 24/7 in any language.
- Policy packs by market: Agents apply country/province-specific rules (KYC, tax, data) and keep an audit trail—so cross-border isn’t a cliff, it’s a checklist.
10× productivity with a smaller, AI-driven tech team
- Agents as operators: They plan steps, call your CRM/ERP/accounting tools, and escalate only on edge cases.
- Where it bites (and pays): KYC throughput, catalog enrichment, late-shipment comms, AR collections, month-end close—measured in hours saved and error rates dropped.
Strategy-grade insight at a fraction of big-four consulting costs
- Boardroom analysis, on tap: Market maps, comps, unit-economics scenarios, pricing simulations—drafted from your data so you spend real consultants on judgment and deals, not spreadsheets.
New business models you can actually run
- Outcome-as-a-Service: Sell verified outcomes (e.g., “cleared invoices,” “verified onboardings,” “recovered carts”) with SLAs, not just software seats.
- Vertical agents: Package your domain playbooks (“clinic intake,” “factory maintenance,” “freight exceptions”) and license them usage-based.
- AI-enabled franchises: Combine your process IP with agents, brand, and training; replicate city-by-city without head-office bottlenecks.
CapEx → OpEx, and cost per task becomes your lever
- You mix hosted AI APIs, open-weight models (when your data differentiates), and small on-device models for privacy/latency. You measure cost per completed task like COGS—and tune it down month by month.
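Treating cost per completed task like COGS can be as simple as a blended unit-cost calculation across model tiers. The numbers below are made-up illustrations, not benchmarks.

```python
# Back-of-envelope "cost per completed task": blend hosted-API, open-weight,
# and on-device tiers into one unit cost you can tune month by month.
# All prices and volumes are hypothetical.

def cost_per_task(calls):
    """calls: list of (tasks_completed, total_cost_usd) per model tier."""
    tasks = sum(t for t, _ in calls)
    cost = sum(c for _, c in calls)
    return cost / tasks

mix = [
    (1_000, 12.0),   # hosted API tier (hardest tasks)
    (4_000, 16.0),   # self-hosted open-weight model
    (5_000, 2.0),    # small on-device model (privacy/latency)
]
print(cost_per_task(mix))  # 0.003 USD per task
```

The lever is visible in the mix itself: routing more tasks to cheaper tiers, where quality allows, drives the blended number down without touching the product.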
The mindset that unlocks it
- Domain first, tooling second. Your industry know-how is the moat; AI is the amplifier.
- Outcomes over features. Ask, “What result am I selling?” not “What screen should I build?”
- Proof beats promise. If it doesn’t show sources, acceptance criteria, and an audit trail, it isn’t ready for customers.
- Iterate in public (with customers). Month-over-month improvements in cost per task and first-pass yield are your real marketing.
What we’ve learned building with customers (and what I’d keep)
At DigiEx Group, we built the company as a tech talent hub + startup studio because that’s what our region needs: deep AI-powered engineering paired with product thinking, LLMOps discipline, and localisation. We’ve shipped cross-border onboarding that explains its decisions, catalog ops that self-clean, and logistics agents that detect SLA risk and draft multilingual comms—always with human escalation and an audit trail.
Across wins and misses, a few lessons keep paying rent:
- Mindset over tools: The hardest part wasn’t the tech, or teaching employees how to use tools; it was helping every team member think differently. That took change management, open communication, and breaking old habits to reimagine what was possible.
- Focus on high-impact first: Instead of applying AI everywhere, we prioritised areas where it could deliver the greatest impact—whether in speed, decision-making, or innovation. Then we learned, standardised, and scaled from there.
- Automate with intention: Not every workflow needs AI. We asked: Does it enhance quality? Speed things up? Enable better decisions? If not, we left it out.
- Safety as muscle memory: Mask PII before prompts, keep sensitive data in-region, design reversible actions, and run SRE-style incident reviews: root cause → guardrail update → new test. (Yes, agents can fail; design so failures teach.)
- Ship a lighthouse workflow in 30–60 days: Pick the ugliest, most measurable pain. Baseline it; ship an agent with guardrails; publish the delta. Momentum beats theory.
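The "mask PII before prompts" habit above is easy to start with a small scrubbing pass before any text leaves your boundary. The patterns below are a minimal sketch covering emails and phone numbers only; production systems need much broader coverage (names, addresses, national IDs).

```python
# Sketch of masking PII before prompts: replace emails and phone numbers
# with placeholders before sending text to a model. Patterns are
# deliberately simple illustrations, not exhaustive detectors.

import re

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "<PHONE>"),
]

def mask_pii(text):
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Contact Ana at ana@example.com or +65 8123 4567 about order 7."
print(mask_pii(prompt))
# Contact Ana at <EMAIL> or <PHONE> about order 7.
```

Run this as the first guardrail in the pipeline, log what was masked, and the same hook becomes the place to enforce in-region data rules.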
So: why now, and why Asia
If the last two decades were Cloud-first, the next decade is AI-first—and that doesn’t just mean new features. It means a new way to build: the model as runtime, language as interface, agents as services, and LLMOps as the production discipline. Companies that internalise this shift won’t just ship faster; they’ll operate differently—measuring quality, cost per task, and trust with the same rigour we once reserved for uptime.
Asia—especially Southeast Asia—is built for this moment. We’re multilingual by default, comfortable with constraints, and close to real customers and real operations. That combination turns agentic AI from a buzzword into Tuesday-afternoon wins: onboarding that explains itself, catalogs that self-clean, logistics comms that happen before the complaint.
And for non-technical founders and SME owners with deep domain knowledge, the door is finally open. You can go global from day one, get 10× productivity where it hurts, and access strategy-grade insight at a fraction of old consulting costs.
—
The post The AI-first era: Why the model is the new runtime and how Asia can lead appeared first on e27.
