
I was sitting with a founder in Singapore last month who had just shipped a generative AI assistant in their fintech product. They were proud of their “Responsible AI” page, complete with a model card, an explainability statement, and a glossy diagram about bias mitigation. But then they said something that caught me off guard:
“Users still don’t trust it. They open it, play for 10 seconds, and switch it off.”
And there it was — the core tension. We think we can explain our way into trust. But in practice, no amount of words will save a user who feels powerless.
This is where I take a contrarian stance: Trust is not a document. Trust is a design choice that gives users the ability to change outcomes.
Let’s unpack this through a story arc — from tension, to examples, to lessons learned, and finally to a guiding principle.
Act one: The illusion of transparency
In Southeast Asia, AI adoption is booming — from ride-hailing to e-commerce to digital banking. Companies are racing to launch AI-enabled features, but the trust playbook still looks like 2018: privacy policies, explainers, and “we take your data seriously” banners.
Take a look at major platforms’ trust dashboards: they are informative, yes, but fundamentally static. They tell users what has been done, not what can be undone.
And users are savvy. They’re less impressed by paragraphs of compliance language and more interested in the one question that matters:
“What happens when the AI gets it wrong — and what can I do about it?”
This is the gap few startups address — and where the opportunity lies.
Act two: A marketplace learns the hard way
Consider Southeast Asia’s bustling peer-to-peer marketplaces. Platforms like Carousell have invested heavily in scam detection, behavioural analysis, and community education campaigns. In 2024, they even published a regional scam trends report and multi-layered security updates to demonstrate transparency.
Yet long-time users still push for stronger verification, escrow payments, and platform-level guarantees.
The lesson? Disclosure is not enough. People don’t just want to know there’s a risk; they want to shift that risk away from themselves.
The contrarian move is to treat risk-shifting as a product feature. Build escrow into the workflow — like Carousell Protection’s rollout — show transaction risk scores, and set a clear “platform eats the loss if…” rule. Suddenly, trust becomes something users can spend.
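To make risk-shifting tangible as a product feature, here is a minimal Python sketch of a “platform eats the loss if…” rule written as code rather than legalese. Everything in it is hypothetical: the names, the 0.7 risk threshold, and the policy itself are illustrations, not Carousell’s actual logic.

```python
from dataclasses import dataclass
from enum import Enum, auto


class EscrowState(Enum):
    HELD = auto()      # buyer has paid; the platform holds the funds
    RELEASED = auto()  # buyer confirmed receipt; funds went to the seller
    REFUNDED = auto()  # dispute resolved in the buyer's favour


@dataclass
class Transaction:
    amount: float
    risk_score: float       # 0.0 (safe) to 1.0 (likely scam), shown to the user
    seller_verified: bool
    escrow_state: EscrowState


def platform_absorbs_loss(tx: Transaction) -> bool:
    """The 'platform eats the loss if…' rule, stated as code instead of legalese.

    Hypothetical policy: if we released escrowed funds on a transaction we
    ourselves scored as high risk, with an unverified seller, the loss is
    ours rather than the user's.
    """
    return (
        tx.escrow_state is EscrowState.RELEASED
        and tx.risk_score >= 0.7
        and not tx.seller_verified
    )
```

The point of the exercise: once the rule is code, it can be tested, audited, and shown to users as a guarantee rather than a disclaimer.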
Act three: Regulation as a product spec
Startups often view regulation as a hurdle. In reality, in Southeast Asia it can be your design brief.
Singapore’s Model AI Governance Framework and its new Generative AI consultation draft are explicit: operationalise accountability, test for safety, be explainable. Indonesia’s PDP Law, Thailand’s PDPA, and Malaysia’s 2024 amendments all tighten cross-border data requirements.
Instead of treating these as compliance checklists, turn them into architectural features:
- Data locality tiers: keep “must-stay” data in-region with edge inference, while allowing “can-mirror” data to sync globally.
- Consent receipts: issue machine-readable receipts for every transfer, so users can see and revoke (sketched in code below).
- Cross-border off-switch: design for a one-week pivot to local storage if adequacy rules change.
This is not hypothetical. ASEAN regulators are moving from principles to testable standards, such as Singapore’s AI Verify framework — a world-first testing toolkit. Designing for “provability” is how you future-proof trust.
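A consent receipt is exactly the kind of provable artefact that mindset produces. Here is a minimal sketch of the consent receipt item from the list above; the field names and purpose labels are assumptions, and a production version would also need signing, durable storage, and enforcement downstream.

```python
import json
import uuid
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class ConsentReceipt:
    """A machine-readable record of one data transfer that the user can revoke."""
    user_id: str
    purpose: str             # e.g. "fraud_model_training" (hypothetical label)
    destination_region: str  # where the data is going, e.g. "us-east-1"
    receipt_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    issued_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    revoked: bool = False

    def to_json(self) -> str:
        # Machine-readable means the user's tools can parse it, not just read it.
        return json.dumps(asdict(self))


# Issue a receipt at the moment data crosses a border, not after the fact.
receipt = ConsentReceipt(
    user_id="u-123",
    purpose="fraud_model_training",
    destination_region="us-east-1",
)
print(receipt.to_json())

# Revocation flips a flag that every downstream pipeline must check before use.
receipt.revoked = True
```

The design choice that matters is that the `revoked` flag is checked at every use of the data, not once at collection.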
Act four: Limits, ladders, and graceful failure
Telling users “the model may be wrong” is a disclaimer. Giving them a ladder out of a bad decision is trust.
Think of three concrete product choices:
- Graceful degradation: If your model confidence is low, switch to human review or rules-based logic — and show that fallback state (see the sketch below).
- Appeal & reversal SLA: Make appealing a wrong decision a two-tap process, and commit to a resolution time. If you’re wrong, compensate automatically.
- Evidence kits: Pre-pack the screenshots and logs a user needs to challenge a decision — don’t make them guess.
These are not just good UX; they are the new trust currency.
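For the first item on that list, here is a rough sketch of graceful degradation in Python. The confidence floor and the response shape are assumptions to be tuned per product, not a standard.

```python
CONFIDENCE_FLOOR = 0.8  # hypothetical threshold; tune per product and risk appetite


def decide(prediction: str, confidence: float) -> dict:
    """Route low-confidence model output to a visible fallback instead of bluffing."""
    if confidence >= CONFIDENCE_FLOOR:
        return {"decision": prediction, "source": "model", "appealable": True}
    # Graceful degradation: hand the case to human review and say so in the UI,
    # rather than letting the model guess and the user discover it later.
    return {"decision": "pending_review", "source": "human_review", "appealable": True}


print(decide("approve_loan", 0.93))  # model decides, and owns the decision
print(decide("approve_loan", 0.41))  # model steps aside, and shows it
```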
Act five: The hidden risk — consent debt
Here’s a concept that doesn’t get enough airtime: consent debt.
Just like tech debt, you accrue consent debt when you quietly expand data usage without granular, revocable permissions. Training on support chats, using personal data for look-alike models, or merging datasets across business units — all build silent liability.
In SEA, where trust in digital platforms can flip quickly and policy moves fast, consent debt is not just a PR risk — it’s existential. The antidote is a Consent Ledger per user:
- Show what data you hold, what model it trains, and what purpose it serves.
- Allow purpose-scoped revocation (“Use my data for personalisation, but not training”), sketched in code below.
- Publish a quarterly data use changelog users can actually read.
Do this before regulators force you to — and you turn trust into a competitive moat.
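Here is one way such a Consent Ledger with purpose-scoped revocation could look. The class and method names are illustrative; a real system would persist the ledger and surface it in the user’s settings.

```python
from collections import defaultdict


class ConsentLedger:
    """Per-user ledger: what data we hold, which purpose it serves, what is revoked."""

    def __init__(self) -> None:
        # user_id -> purpose -> datasets the user has consented to for that purpose
        self._grants: dict[str, dict[str, set[str]]] = defaultdict(
            lambda: defaultdict(set)
        )

    def grant(self, user_id: str, purpose: str, dataset: str) -> None:
        self._grants[user_id][purpose].add(dataset)

    def revoke_purpose(self, user_id: str, purpose: str) -> None:
        # Purpose-scoped revocation: drop one use of the data, keep the others.
        self._grants[user_id].pop(purpose, None)

    def allowed(self, user_id: str, purpose: str, dataset: str) -> bool:
        # Pipelines should ask this before every use, not once at signup.
        return dataset in self._grants[user_id].get(purpose, set())


ledger = ConsentLedger()
ledger.grant("u-123", "personalisation", "support_chats")
ledger.grant("u-123", "model_training", "support_chats")

# "Use my data for personalisation, but not training."
ledger.revoke_purpose("u-123", "model_training")
assert ledger.allowed("u-123", "personalisation", "support_chats")
assert not ledger.allowed("u-123", "model_training", "support_chats")
```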
Act six: What makes SEA unique
Unlike the US or EU, Southeast Asia is a patchwork of regulatory environments and cultural norms:
- Policy heterogeneity: You need switchable privacy modes, not one global setting (see the sketch after this list).
- Messaging-first commerce: Trust is often mediated through WhatsApp, LINE, or Telegram, so verification and decision-summaries must travel across chat apps.
- Localised moderation norms: Global one-size policies fail; you need language- and culture-specific model adapters to avoid political or cultural backlash.
Startups that build with these factors in mind will not only comply but resonate with users in a region where trust is as much about face as it is about function.
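And a toy sketch of switchable privacy modes for the policy heterogeneity point above. The per-country values are placeholders only, not a reading of any actual law; getting them right needs counsel in each market.

```python
# Hypothetical per-market profiles; the values are placeholders, not legal analysis.
PRIVACY_PROFILES = {
    "SG": {"data_residency": "can_mirror", "consent_granularity": "purpose"},
    "ID": {"data_residency": "in_region", "consent_granularity": "purpose"},
    "TH": {"data_residency": "in_region", "consent_granularity": "broad"},
}

# The strictest profile is the default, so an unmapped market fails safe.
STRICTEST = {"data_residency": "in_region", "consent_granularity": "purpose"}


def privacy_mode(country_code: str) -> dict:
    """Resolve a switchable privacy mode per market instead of one global setting."""
    return PRIVACY_PROFILES.get(country_code, STRICTEST)


print(privacy_mode("ID"))  # in-region storage, purpose-level consent
print(privacy_mode("VN"))  # unmapped market falls back to the strictest profile
```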
Act seven: The guiding principle
After dozens of conversations with founders, regulators, and users, I keep coming back to one principle:
Trust must be actionable.
If a reasonable user cannot change the outcome of an AI decision, you haven’t built a trust feature — you’ve built a brochure.
The metric that matters is not “number of users who viewed our policy page.” It’s median time from user appeal to resolution with restitution.
When startups measure and optimise that, they do more than comply — they set the tone for what trustworthy AI feels like. And in a region as dynamic as Southeast Asia, that could be the edge that keeps your product in play when the next wave of AI regulation — or public backlash — hits.
Your turn
If you launched your AI feature tomorrow, would your users have the power to undo, appeal, or reverse a decision — or would they just be left reading a policy page?
