i10X co-founder Patrick Linden
AI models are getting smarter by the week, but for most teams the real struggle isn't intelligence; it's fragmentation: too many tools, overlapping subscriptions, messy workflows, and no clear way to know what actually works.
In Part 1 of e27's interview with Patrick Linden, co-founder of the Iterative- and Antler-backed i10X, which gives users access to the world's most powerful AI models through a single platform, we unpack why the Singapore-based startup is betting on a neutral AI meta-layer instead of building yet another model, how it crossed 100,000 users at pre-seed, and why retained, habitual usage, not hype, defines success in the next phase of AI adoption.
Excerpts:
AI tools are exploding everywhere. Why did you believe now was the moment to build a meta-layer instead of yet another model or agent?
Models are improving rapidly, but teams still lose time on the basics: identifying what works, integrating tools, and paying for too many overlapping subscriptions. We felt the bigger bottleneck was fragmentation, not raw intelligence.
Also Read: i10X nets US$1M to unify the world’s leading AI models
So we focused on a unified AI workspace: discover the right model or agent, use it in one place, and then connect agents into workflows as the next step. That’s also why our roadmap starts with Discovery (live) and moves to orchestration (what we call the “Agent Graph”) next, instead of trying to out-model the model providers.
At what point did you realise fragmentation, not intelligence, was the real bottleneck in AI adoption?
People weren't blocked by "AI can't do it"; they were blocked by:
- “Where do I even start to find what’s out there?” (too many agents, no trusted guide)
- “Which of the agents/tools actually works specifically for my needs?”
- Tool chaos and context switching
- Cost sprawl from lots of subscriptions.
Even the best models don't help if the workflow is messy. That is when we realised fragmentation, not intelligence, was the real bottleneck, and that's the problem we built around.
Many platforms aggregate models. What’s the most complex technical or operational problem i10X has solved that outsiders underestimate?
Aggregation is not the hard part. The hard part is making an all-in-one AI workspace really useful and reliable at scale, which includes:
- building out reliable cross-model and cross-agent memory so users don’t have to repeat context
- keeping the catalogue usable as the number of agents grows
- learning from real usage, which agent works best for a task in practice
- dynamically orchestrating the right agents/tools in our upcoming workflow engine.
If OpenAI, Anthropic, or Google were to build a unified workspace tomorrow, what would still protect i10X?
Two things: neutrality and breadth. i10X is built to help users pick the right agent for the job, not to push a single vendor's ecosystem. Our ideal customer profile is SME owners, power users, founders and freelancers. Their goal is to find the best AI for a specific task, say a customer support chatbot or an AI sales development representative. They don't really care in which ecosystem the solution lives. i10X sits as a neutral meta-layer above individual ecosystems.
Also Read: The AI-first era: Why the model is the new runtime and how Asia can lead
There is also compounding usage data: every task run in i10X teaches the Agent Graph what works best in which environment. That feedback loop improves results over time and is hard to copy without the same usage.
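To make the feedback loop concrete, here is a minimal sketch of how usage data could drive agent recommendations. The class, method names, and task labels are purely illustrative assumptions, not i10X's actual implementation: it simply records per-agent success rates for each task type and recommends the best performer.

```python
from collections import defaultdict

class AgentGraph:
    """Toy sketch: track per-task success rates and recommend the
    agent with the best observed performance for a task type."""

    def __init__(self):
        # (task_type, agent) -> [successes, total runs]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, task_type, agent, success):
        # Log one completed task run as feedback.
        entry = self.stats[(task_type, agent)]
        entry[0] += int(success)
        entry[1] += 1

    def recommend(self, task_type, agents):
        # Return the candidate with the highest success rate;
        # agents with no history default to 0.0.
        def rate(agent):
            successes, runs = self.stats[(task_type, agent)]
            return successes / runs if runs else 0.0
        return max(agents, key=rate)

graph = AgentGraph()
graph.record("support-chatbot", "agent_a", True)
graph.record("support-chatbot", "agent_a", True)
graph.record("support-chatbot", "agent_b", False)
print(graph.recommend("support-chatbot", ["agent_a", "agent_b"]))
```

A production system would weigh far more signal (latency, cost, user ratings, task context), but the compounding dynamic is the same: every run adds data that sharpens the next recommendation.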
You’ve crossed 100,000 users at pre-seed. What metric matters more to you right now than user count? How much of that usage is habitual versus experimental, and what signals tell you i10X has become ‘mission-critical’?
We're seeing exponential growth across all key metrics, including our paid user base. User count is nice, but we focus more on retained usage and repeat behaviour.
The metric that matters most is how many users return weekly and complete real tasks, not just browse. We look for signals like:
- repeat sessions without prompts from us
- users relying on i10X as their default place to try and run AI
- paid users staying even after the initial “AI exploration” phase
- teams consolidating multiple tools into one subscription.
What’s the most common reason users churn, and what does that reveal about the current limits of AI workspaces?
Our current churn rate is within industry norms, in line with the foundation model providers. It has improved threefold from where we started six months ago, thanks mainly to a laser focus on rapid product improvements.
The most common reason for churn is simple: some users come in expecting a single tool to instantly solve a vague problem. If they don't have a clear use case, they won't stick.
What that reveals about the current limits of AI workspaces is quite clear: the gap isn't intelligence but operational reliability: fast discovery, predictable outcomes, and an easy path from experimentation to daily use.
At US$25 a month, i10X is aggressively priced. Is this a wedge strategy or a long-term pricing belief? How do you prevent becoming a ‘thin-margin middleman’ between powerful AI providers and end users?
At US$25 per month, the intent is straightforward: make i10X an everyday “default tab”, not a high-friction procurement decision. So yes, it’s a wedge in the sense that it lowers adoption friction, but it’s also a long-term belief that AI should be priced like infrastructure, because it’s becoming part of daily work.
Also Read: The AI revolution in emerging markets: Local models, global impact
The “thin-margin middleman” framing doesn’t really fit i10X, because we’re not selling access to someone else’s product; we’re building the AI workspace where the user lives. The value compounds at that layer:
- We own the user relationship: users start with i10X to get work done; providers sit behind the interface.
- All in one space: discover, use, and orchestrate agents in a single workspace with one login and one subscription.
- Discovery that’s actually trusted: finding what works across all agents without trial-and-error.
- The Agent Graph: every task run improves indexing and recommendations, so results get better with usage.
- Workflow orchestration next: moving from “individual agents” to running repeatable multi-agent workflows.
Pricing stays simple: plans scale with credits and depth of access, and for B2B, we have seat-based pricing. The goal is strong unit economics from retention and consolidation, not from marking up tokens.
