
In many parts of Asia, we’re used to thinking of tech as the great equaliser. From Indonesia’s fintech revolution to India’s edutech boom to Japan’s automation economy, digital innovation has lifted millions.
As AI begins to shape not just what we use but also how we work, hire, and govern, we must confront a harder truth: If AI is built without inclusion, it won’t just replicate bias; it will scale it.
Right now, we’re at a turning point. The next wave of tech giants, especially in fast-growing Asian economies, will be judged not just by how fast they build, but by how responsibly they do it. Inclusion isn’t a Western HR buzzword. It’s a leadership standard, a competitive differentiator, and increasingly, a core pillar of risk management.
And nowhere is this more urgent than in how we build and lead with AI.
Why inclusion can’t be a side project
Let’s be blunt. AI doesn’t fix bias; it learns it. From data. From decisions. From developers.
If your hiring model is trained on past employee profiles, it may quietly prefer male engineers from elite schools. If your loan scoring algorithm reflects historical banking access, it may penalise borrowers from rural provinces. If your product voice is built around one cultural perspective, it might alienate, or even offend, others.
These aren’t just hypotheticals. We’ve seen AI mishaps worldwide. But the risk in Asia is even more layered. Our region holds 60% of the world’s population. It includes the world’s largest Muslim population, its fastest-ageing societies, the most linguistically diverse cultures, and massive informal economies.
If AI products don’t include these nuances from the start, they will fail large parts of this continent.
How Asian tech can build DEI into product DNA
Let’s look at this not as a crisis but as an opportunity for leadership.
- Design with, not for
Bring diverse users into your product cycle early. And I don’t mean a last-minute focus group. I mean real co-design.
- If you’re building for India’s gig economy, include single mothers from Tier-two cities in your prototyping.
- If your voice assistant is going live in Malaysia, involve users with heavy accents, multilingual homes, and vision impairments.
- If you’re launching a mental health app in Korea or Japan, consider cultural stigma around seeking help and adjust tone and experience accordingly.
- Audit bias before it becomes a PR crisis
Create internal “bias squads” whose job is to break your system before the public does. Make it someone’s responsibility to ask:
- Who does this product fail?
- Who has no access to it?
- What happens when it gets things wrong?
Celebrate this work. Don’t bury it.
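As one illustration of how a bias squad might turn those questions into a repeatable check, here is a minimal sketch in Python. All names and numbers are hypothetical, and the four-fifths ratio used here is just one common first-pass screen for disparate impact, not a definitive fairness standard.

```python
from collections import defaultdict

def selection_rates(records):
    """Selection rate (share of positive decisions) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def four_fifths_violations(rates):
    """Flag groups whose selection rate falls below 80% of the
    best-served group's rate (the 'four-fifths rule', a common
    first-pass screen for disparate impact)."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < 0.8 * best]

# Hypothetical hiring-model decisions: (applicant group, was shortlisted)
decisions = [("urban", True)] * 40 + [("urban", False)] * 60 \
          + [("rural", True)] * 15 + [("rural", False)] * 85

rates = selection_rates(decisions)      # {'urban': 0.40, 'rural': 0.15}
print(four_fifths_violations(rates))    # ['rural']: 0.15 < 0.8 * 0.40
```

A check like this can run in CI on every model release, so the squad breaks the system before the public does.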
- Measure fairness, not just accuracy
We love KPIs in Asia. So let’s build fairness into them. Don’t just track click-through rates or error margins. Track:
- Representation across user feedback
- Gender or income parity in outcomes
- AI misfires by language or region
Make inclusion a product metric, not just a marketing message.
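To make fairness measurable alongside accuracy, a team could compute per-segment error rates from the same evaluation data it already collects. A minimal sketch, with hypothetical data and segment labels:

```python
from collections import defaultdict

def fairness_dashboard(results):
    """results: iterable of (segment, correct) pairs.
    Returns overall accuracy, error rate per segment, and the
    largest error-rate gap between any two segments."""
    totals, errors = defaultdict(int), defaultdict(int)
    for segment, correct in results:
        totals[segment] += 1
        errors[segment] += int(not correct)
    error_rates = {s: errors[s] / totals[s] for s in totals}
    overall_acc = 1 - sum(errors.values()) / sum(totals.values())
    max_gap = max(error_rates.values()) - min(error_rates.values())
    return {"accuracy": overall_acc,
            "error_by_segment": error_rates,
            "max_gap": max_gap}

# Hypothetical voice-assistant results tagged by user language
results = [("th", True)] * 90 + [("th", False)] * 10 \
        + [("my", True)] * 70 + [("my", False)] * 30

dash = fairness_dashboard(results)
# Overall accuracy is 0.80, but the error rate is 0.10 for "th"
# versus 0.30 for "my": a 0.20 gap the headline number hides.
```

Tracking `max_gap` as a KPI next to accuracy is one way to make "AI misfires by language or region" a number leadership actually reviews.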
Rethinking leadership: DEI-AI literacy is your new skill gap
In many Asian companies, DEI still lives in HR. But AI lives everywhere. So what happens when engineering moves faster than ethics?
Here’s what leaders must do, urgently:
- Get fluent in the risks
You don’t need to code Python, but you do need to understand:
- How AI makes decisions.
- Where data bias comes from.
- What to do when AI gets it wrong.
Think of it like cybersecurity. You don’t run the firewall yourself, but you sure need to know when it’s leaking.
- Learn through real-world scenarios
Thailand’s AI landscape faces pressing gender and inclusion challenges that deserve greater attention. Ranked 79th in the 2022 Global Gender Gap Index, with a slight year-on-year decline, the country reflects persistent cultural stereotypes, including portrayals of women in domestic roles.
While Thailand stands out for its visible LGBTQ+ representation in media, systemic discrimination and misrepresentation remain, and these biases risk being hardcoded into AI systems through unbalanced training data. Ethnic minorities, too, face disproportionate exclusion and online bias.
Experts warn that AI in Thailand often lacks adequate representation of women, LGBTQ+ individuals, and other marginalised groups, not only in datasets but also in testing and monitoring processes. In many cases, gender distinctions are lost in aggregated data, and few AI projects undergo rigorous gender impact assessments.
Although general AI ethics frameworks are in place, they may fall short without more targeted safeguards. A dedicated gender and inclusion module could play a critical role in correcting representational gaps and strengthening bias mitigation. For Asia’s fast-growing tech ecosystem, Thailand serves as a reminder: inclusive AI design isn’t just good ethics, it’s essential for building trustworthy, culturally relevant technology.
Watching leaders react and course-correct was eye-opening.
The lesson? You need to feel the problem before you can solve it.
- Use AI to fix culture, too
Why not use AI to look inward?
- Who’s speaking up in meetings?
- Who gets promoted fastest?
- Who’s quietly disengaging?
Inclusion isn’t just hiring more women or sponsoring International Women’s Day. It’s a system. AI can help spot where that system is leaking talent, voice, or trust.
Controversial but necessary: Embracing complexity
Let’s not pretend this work is easy. Especially in Asia, where conversations around gender, caste, ethnicity, nationalism or religion can be sensitive or political.
So here’s some real talk:
- Inclusion will sometimes feel “unfair”
You may have to give extra attention to groups historically left behind. That can create pushback. Some will ask: “Why are we prioritising this group?”
Answer: “Because the system didn’t before.”
Inclusion isn’t about punishing anyone. It’s about correcting imbalance. If that feels uncomfortable, it probably means it’s working.
- Not all DEI is good DEI
When done badly, DEI becomes performative. Slapping a diverse image on your homepage while your data team is 90% male doesn’t build trust. And yes, overcorrecting representation, like what happened with Google’s image generator, can backfire if not grounded in context.
Be thoughtful. Be intentional. Don’t replace one stereotype with another.
The inclusive tech playbook for Asia’s startups
If you’re a founder or product leader, here’s where to start:
| Step | Action |
| --- | --- |
| 1 | Assess your products for exclusion risks, including language, income, ability, gender, and region. |
| 2 | Involve real users from different communities early, not just as testers but as collaborators. |
| 3 | Run DEI-AI literacy sessions for your leadership team, using real local case studies. |
| 4 | Track fairness alongside product performance, and publish internal dashboards. |
| 5 | Be the company that talks about inclusion publicly. It attracts trust and talent. |
Final word: Inclusion is Asia’s edge
Asia doesn’t need to copy Silicon Valley’s mistakes. We have the opportunity to lead differently, to build AI that reflects our diversity, our cultures, our realities. Our superpower isn’t just speed or scale. It’s the ability to blend innovation with values rooted in community, family, and balance.
If we lead with that spirit, tech in Asia won’t just be fast. It’ll be fair. And that might just be our biggest advantage of all.
—
Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.
The post Inclusive AI isn’t optional – it’s Asia’s tech advantage appeared first on e27.
