Can AI romance fix language learning? Hyperbond believes so

Hyperbond Studio co-founders Jack Vinijtrongjit and Shawn Tan (R)

As AI-native language platforms race beyond flashcards and drills, Hyperbond Studio’s Call Me Sensei is betting that engagement, not curriculum, is the true unlock. Blending character-driven AI, memory systems, and relationship mechanics, the Singaporean startup is reimagining language learning as an emotionally immersive experience rather than a structured syllabus.

The AI startup — founded by Shawn Tan (who is also General Partner at TRIVE Ventures) and Jack Vinijtrongjit — recently raised US$500,000 from investors, including NLS Ventures, Loyal VC, and Attribute Global Ventures, for its innovative platform.

In this interview, Tan breaks down the thesis, technology, safeguards, localisation strategy, and business model behind their unconventional approach.

Edited excerpts:

What core problem is Call Me Sensei solving, and what evidence convinced you engagement is the main bottleneck?

Language learning itself is not a new problem. However, most language products start from the assumption that better outcomes come from better curricula (e.g. more structured lessons, more innovative drills, better sequencing). This leads to experiences that feel like work, resulting in high attrition and extremely low long-term retention across the category.

Platforms like Duolingo illustrate the limitations of this approach. While Duolingo has succeeded as a product, much of its engagement is driven by external mechanics, such as streaks. Many users return daily to protect a streak rather than because they enjoy learning or can communicate fluently. It’s not uncommon to see users with 500-day or 1,000-day streaks who still struggle to hold a basic conversation. The system optimises for habit formation, not sustained, intrinsic motivation to communicate.

We take the reverse approach. Instead of starting with “what should someone learn today?”, we begin with “what would someone enjoy doing, voluntarily, for 20 to 30 minutes?” We design an emotionally engaging experience first, and let language learning happen as a byproduct. This makes users want to return and spend more time practising the language.

How do you define and measure “learning” in Call Me Sensei, and will you run studies against Duolingo, Babbel, or human tutors?

Given our approach, we deliberately do not define learning through a fixed curriculum or prescribed outcomes. There is no roadmap, syllabus, or linear progression. Learners decide what and when they want to learn — buying groceries one day, ordering coffee the next — based on immediate interest and context.

As such, we do not “prove” learning through test scores or completion rates. Instead, we focus on retention, session frequency, and time spent as leading indicators. We hypothesise that a learner who voluntarily spends more time speaking and listening will ultimately learn more than one who follows an optimal curriculum and then abandons it.

What makes Call Me Sensei meaningfully different from a generic LLM chat with prompts, characters, and memory?

Call Me Sensei is built on a proprietary AI architecture designed for character-driven, relationship-based interaction, not generic assistance.

Generic LLMs are optimised to be helpful and agreeable. Our system is designed to produce human-like personalities: characters with consistent traits, emotional reactions, memory, and boundaries. Responses are not just linguistically correct; they are situationally and emotionally grounded.

In addition, the experience is conversation-first and embedded within structured scenarios and relationship states. Conversations evolve over time, shaped by past interactions, rather than resetting each session. This creates continuity, emotional stakes, and a sense of progression that cannot be replicated by prompting a general-purpose chatbot.

How does your memory system work, what does it store, and how do users control or delete it without reinforcing unhealthy dynamics?

Memory helps make interactions feel coherent and personalised, but it’s designed with clear limits. Each sensei has a defined personality that influences what they tend to remember (for example, learning preferences or recurring topics) while avoiding unnecessary or overly personal data.

Memories are stored in a controlled and privacy-conscious manner and are meant to support learning continuity, not permanence. Users can reset interactions or delete their account at any time, and we intentionally design memory systems to allow for change over time so users aren’t locked into past behaviour, mistakes, or emotional states.
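Hyperbond does not disclose its implementation, but the properties described above (selective storage, gradual de-emphasis of old behaviour, and a user-initiated reset) can be sketched as a simple per-character memory store. Everything here, including the class name `SenseiMemory` and the decay half-life, is a hypothetical illustration, not the product's actual design:

```python
import time
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    """One remembered fact, e.g. a learning preference or recurring topic."""
    text: str
    created_at: float = field(default_factory=time.time)

class SenseiMemory:
    """Illustrative per-character memory store: recent items outweigh
    older ones, and the user can wipe everything at any time."""

    def __init__(self, half_life_days: float = 30.0):
        self.half_life = half_life_days * 86400  # half-life in seconds
        self.items: list[MemoryItem] = []

    def remember(self, text: str) -> None:
        self.items.append(MemoryItem(text))

    def recall(self, top_k: int = 5) -> list[str]:
        """Return up to top_k memories, ranked by exponential time decay,
        so past behaviour gradually stops shaping the conversation."""
        now = time.time()
        ranked = sorted(
            self.items,
            key=lambda m: 0.5 ** ((now - m.created_at) / self.half_life),
            reverse=True,
        )
        return [m.text for m in ranked[:top_k]]

    def reset(self) -> None:
        """User-initiated wipe: delete all stored memories."""
        self.items.clear()
```

The decay ranking means a preference stated months ago carries less weight than one stated yesterday, which is one way a system could avoid "locking in" a user's past mistakes or emotional states.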

This ensures the experience remains flexible, age-appropriate, and supportive rather than prescriptive or restrictive.

What safety guardrails are in place for romance mechanics, sexual content, manipulation, minors, and self-harm scenarios?

The app is designed for users aged 13 and up, with all romantic mechanics strictly non-sexual and framed around age-appropriate, consent-based interactions. Safety is a core requirement at every level of the experience. We apply strict age-appropriate guardrails around sexual content, harassment, manipulation, and power-imbalanced dynamics.

Interactions are evaluated on a per-message basis using automated systems designed to prevent inappropriate content and to discourage emotional dependency, exclusivity, or coercive behaviour. The system is also designed to respond to signs of distress or self-harm by redirecting conversations and encouraging users to seek trusted external support.

While conversations are end-to-end encrypted to protect user privacy, we use privacy-preserving safety signals and extensive internal testing to ensure policies are consistently enforced as the product evolves.

What are the key cost drivers of running relationship-based AI at scale, and how will you protect gross margins?

We are not able to share unit economics at this stage. What we can say is that emotionally rich, voice-first AI experiences are computationally expensive, and cost discipline is a core part of our technical design and roadmap as we scale usage.

How do you localise scenarios culturally beyond translation, and who validates tone, taboos, and context across languages?

We do not treat localisation as simple translation. For each language, we work with native speakers and cultural reviewers to ensure interactions feel natural rather than generic. The system is built to support culturally distinct narratives, not one global template reskinned across markets.

Is this a tutoring product, entertainment subscription, or hybrid — and what monetisation model and metrics will guide scaling revenue?

Call Me Sensei is intentionally a hybrid. Education defines the outcome; entertainment drives engagement.

The product follows a freemium model for consumers, with premium subscriptions and in-app purchases over time. In the long term, we also see opportunities in B2B partnerships. Monetisation will scale in step with engagement; our priority is first to build something users genuinely want to return to.

The post Can AI romance fix language learning? Hyperbond believes so appeared first on e27.