Before you can give feedback: Creating the culture where it can be heard

Imagine this.

You’ve just read a brilliant guide on giving feedback.

You’ve mastered the frameworks: Radical Candour, HHIIPP, GAIN – and you’re ready to build a high-performance culture. You pull a team member aside to deliver a piece of well-intentioned, perfectly structured critical feedback. You’re humble, helpful, and immediate. But instead of a constructive dialogue, you watch the light in their eyes die as the team member retreats into a shell of resentful compliance.

A week later, their LinkedIn status quietly flips to “Open to Work”.

What fresh hell is this? You did everything by the book.

Here, we’ll explore the concept of psychological safety and why this is the most brutally practical predictor of your team’s success. We will dissect what it is, what it isn’t, and how to diagnose its conspicuous absence – especially within the nuanced cultural landscape of an Asian startup.

What psychological safety actually means (and what it doesn’t)

The definition

Let’s be honest. “Psychological Safety” sounds like something you’d discuss at a corporate retreat involving trust falls. Harvard’s Amy C. Edmondson, who put this concept on the map, defines it as a “shared belief that the team is safe for interpersonal risk taking.”

In simple English, it’s the feeling that you can speak up, admit a mistake, ask a “stupid” question, or challenge the status quo without being publicly flogged for it.

This isn’t just a nice-sounding theory. When Google embarked on its Project Aristotle to build the perfect team, they crunched data from hundreds of teams. They found that the single most important dynamic – not individual brilliance, not team size, not even co-location – was psychological safety. It was the secret sauce that allowed talent to translate into results.

The critical misconceptions

Many founders who pride themselves on a high standard or “tough” culture instinctively recoil from the term. They equate safety with softness. They mistake it for a lack of accountability. Let’s dismantle these myths.

  • Myth: It means lowering standards. Reality: It means creating an environment where people feel safe to stretch and strive for high standards without fear of blame if they fall short.
  • Myth: It’s about being “nice.” Reality: It’s about being direct, candid, and challenging, but with a foundation of respect and a shared commitment to learning. It’s not about avoiding conflict, but about engaging in it productively.
  • Myth: It eliminates accountability. Reality: It’s the very thing that enables accountability. When people feel safe, they are more likely to take ownership of their mistakes, making it possible to hold them accountable for learning and improving from them.
  • Myth: It’s for weak or fragile teams. Reality: It’s the defining characteristic of the most resilient, innovative, and high-performing teams. Fear-based cultures are the ones that are truly fragile, as they are unable to adapt to change or learn from failure.

Here lies a paradox every founder must understand: the goal is not to create a comfortable, low-pressure environment. The goal is to pair high psychological safety with high standards. High psychological safety + high standards = the learning zone. This is where innovation, resilience and sustainable high performance live. Without safety, high standards simply create an anxiety zone, a toxic pressure cooker of burnout and attrition.

Why psychological safety is the #1 predictor of team performance

The hard data on performance and retention

Let’s talk numbers. The data paints an alarming picture of the cost of fear.

  • Your best people are leaving: A 2024 BCG study found that employees in low-safety environments are four times more likely to quit within a year (12 per cent, vs three per cent). For diverse talent, the numbers are even more stark: High safety increases retention by 4x for women and BIPOC employees, and 6x for LGBTQ+ employees. In a talent war, you are unilaterally disarming.
  • You’re bleeding productivity: Gallup research connects a climate where opinions are valued to a 27 per cent reduction in turnover, a 40 per cent drop in safety incidents, and a 20 per cent boost in productivity. Fear is expensive. It’s a tax on every single action your team takes.

What these numbers represent is the unlocking of human potential. In a safe environment, people stop spending energy on political manoeuvring and self-preservation and start spending it on what you hired them for: solving hard problems. They ask for help, they admit mistakes, they share half-baked ideas that just might be brilliant, and they tell you the truth, even when it’s ugly. For a startup, where learning speed is the only true competitive advantage, this isn’t a luxury; it’s the entire game.

Diagnosing psychological safety — Is your team actually safe?

The founder is often the last to know about the rot in the kingdom. Forget the obvious signs – the shouting matches, the public shamings. The real indicators of low psychological safety are far more insidious. The silence in your meetings isn’t consensus; it’s a symptom.

The subtle signs Founders often miss

  • The absence of bad ideas: If you’re only hearing well-polished, safe suggestions, it’s not because your team is brilliant. It’s because they are terrified to share the messy, half-formed thoughts where real innovation begins.
  • The echo chamber: Your ideas are met with vigorous, uncritical agreement. This isn’t a sign of your genius; it’s a sign that your team has learned it’s easier to agree with you than to engage in debate.
  • The proliferation of process: When people are afraid to use their judgment, they cling to process like a life raft. They will follow a bad process to the letter, because the process can’t be fired.
  • The backchannel: The real conversations are happening on Slack DMs, in hushed whispers by the coffee machine, and in post-meeting debriefs where everyone says what they really think. The meeting itself is a theatre.
  • The solo hero: People would rather struggle alone for days than ask for help and risk looking incompetent. They are optimising for the appearance of competence, not for the speed of execution.

The ultimate litmus test: The flow of bad news

If you want one, brutally simple diagnostic, ask yourself this: When was the last time someone on your team brought you truly bad news, early?

Not after it was already a multi-alarm fire, but when it was just a wisp of smoke. As Amy C. Edmondson warns, “If there’s no bad news, remind yourself: It’s not that it’s not there. It’s that you’re not hearing about it.” The silence is not golden. It’s the sound of your company failing in slow motion.

The Asian startup context — Cultural challenges you must navigate

Now, for our readers in Singapore, Hong Kong, and beyond: if you’ve tried to implement a “speak truth to power” culture and been met with horrified silence, you’re not alone. While the principles of psychological safety are universal, their application is not. For founders in Asia, simply importing Western frameworks without cultural translation is a recipe for failure.

The power distance problem

The hierarchical nature of many Asian societies and different communication norms create unique challenges that must be understood and addressed. In many Asian cultures that score highly on Hofstede’s Power Distance index, the social fabric is woven with threads of hierarchy and deference. Challenging a superior isn’t just a disagreement; it can be perceived as disrespect.

The concept of “saving face” isn’t a weakness; it’s a fundamental social lubricant.

When a Western-trained founder encourages their team to “challenge everything”, they think they are fostering innovation. But to an employee raised in a high-context, hierarchical culture, they may be asking them to commit a deeply uncomfortable social transgression.

Lost in translation

The very language of psychological safety is a stumbling block. As we’ve noted, “interpersonal risk taking” is a foreign concept. When you ask a team member if it’s “safe” to take a risk, they are likely thinking about financial or project risk, not the risk of disagreeing with you in a meeting. This cognitive mismatch renders most standard surveys and one-size-fits-all approaches useless.

Adapting psychological safety for Asian startups

Building psychological safety in Asia requires you to be a cultural translator, not a doctrinal importer.

  • Reframe the mission: Don’t ask people to challenge you. Ask them to honour the company’s mission by stress-testing ideas. Frame dissent not as a challenge to authority, but as a duty to the collective goal.
  • Create structured channels: Don’t start with open-floor debates. Begin with structured, safer channels. Use written feedback, 1-on-1 sessions, or even anonymous tools as a bridge. The goal is to build the “muscle” of dissent in a way that feels culturally accessible.
  • Lead by losing face first: You, the founder, must be the first to “lose face”. Publicly admit your own mistakes. Thank people for correcting you. When you demonstrate that your own ego is secondary to the best outcome, you give your team permission to do the same.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.

Enjoyed this read? Don’t miss out on the next insight. Join our WhatsApp channel for real-time drops.

Image courtesy: Canva

The post Before you can give feedback: Creating the culture where it can be heard appeared first on e27.

Innovation oversight and growth governance: Boards as enablers of strategic opportunity

Innovation is often framed as the domain of executives, R&D teams, or product leaders. Boards are traditionally viewed as monitors of risk, finance, and compliance. But in Asia’s fast-moving markets, innovation is a core governance responsibility. Boards that fail to actively oversee innovation risk stagnation, missed growth opportunities, and competitive irrelevance.

The future-ready board does not replace management in innovation but provides strategic guidance, challenge, and oversight, ensuring that investments in growth initiatives align with long-term value creation.

Why boards must own innovation oversight

Several forces make innovation governance a board priority:

  • Rapid digital disruption: AI, cloud platforms, fintech, and platform ecosystems are transforming entire industries.
  • Global competitive pressures: Companies in Asia compete with both established multinationals and agile startups.
  • Investor expectations: Growth and innovation metrics increasingly influence investor confidence and valuation.
  • Complexity of capital allocation: Boards must ensure innovation budgets are optimised, ROI is monitored, and strategic alignment is maintained.

Boards that fail to actively engage risk leaving executives unchallenged, increasing the likelihood of misaligned innovation investments.

A board framework for innovation oversight

Effective boards oversee innovation across strategy, risk, and culture:

Strategic alignment

  • Ensure innovation initiatives align with long-term business objectives.
  • Evaluate emerging markets, technology trends, and customer needs as part of the strategic agenda.
  • Assess portfolio balance: core, adjacent, and transformational initiatives.

Risk-return oversight

  • Monitor the innovation pipeline with clearly defined success metrics and stage-gates.
  • Encourage scenario planning for high-impact, low-probability innovation failures.
  • Understand regulatory, reputational, and operational risks associated with new initiatives.

Talent and culture enablement

  • Assess whether the organisation has the right skills, mindset, and leadership to innovate.
  • Promote cross-functional collaboration and experimentation while maintaining accountability.
  • Monitor incentives and culture to ensure innovation is rewarded and risk-taking is disciplined.

Key questions boards should ask

Boards should challenge management with questions that drive both oversight and strategic value:

  • What are our innovation priorities, and how are they linked to corporate strategy?
  • How do we balance short-term performance pressures with long-term experimentation?
  • Which emerging technologies or business models could disrupt our market?
  • How do we track adoption, impact, and ROI of innovation initiatives?
  • Are we building an organisational culture that supports disciplined risk-taking?

The answers allow boards to influence direction without micromanaging execution.

Innovation metrics for boards

Boards can measure innovation through a combination of leading and lagging indicators:

  • R&D expenditure relative to revenue
  • Time-to-market for new products or services
  • Success rate of pilot programs and proof-of-concepts
  • Adoption and engagement metrics for digital solutions
  • Strategic alignment and contribution to long-term growth

Tracking these metrics ensures that innovation efforts are measurable, monitored, and aligned with enterprise value.

Boards as guardians of responsible innovation

Innovation carries inherent risk — regulatory, reputational, financial, and ethical. Boards must ensure that growth initiatives:

  • Comply with laws, regulations, and industry standards
  • Incorporate ethical considerations, especially for AI, data, and sustainability initiatives
  • Maintain transparency and accountability in decision-making
  • Include clear escalation and reporting mechanisms for unexpected outcomes

Boards that integrate these principles create responsible innovation, safeguarding enterprise resilience while enabling growth.

The independent director’s contribution

Aspiring independent directors bring value by:

  • Providing cross-industry insights on emerging technologies and business models
  • Challenging assumptions and encouraging robust debate on strategic bets
  • Ensuring balance between risk and reward in innovation investments
  • Supporting management in building a culture of disciplined experimentation

Their independent perspective enhances governance while empowering executives to innovate boldly yet responsibly.

Conclusion: Growth governance as a board imperative

Innovation is no longer optional; it is a strategic requirement. Boards that integrate innovation oversight into governance:

  • Protect against wasted investments and strategic missteps
  • Accelerate value creation by guiding strategic experiments
  • Strengthen enterprise resilience by balancing risk and reward
  • Foster an organisation-wide culture of disciplined innovation

For Asian boards, the challenge is clear: shift from passive approval to active governance of growth initiatives. The boards that do so will lead companies to sustainable, long-term success in increasingly competitive and unpredictable markets.

This article was first published on The Boardroom Edge.


When public service apps forget the people they serve

The story began when I witnessed my mother struggle with a mobile application to monitor her pension payments. What should have been a simple authentication process turned into repeated attempts to scan her face, adjusting angles, moving between rooms, and changing lighting, only to end with the app crashing without explanation.

When she asked me to contact customer service, I realised something more troubling. There was no clear support channel, no customer service, just an application that failed silently.

Her story turned out to be only the first episode. Days later, I tried to extend my vehicle registration after being told the process was available online. But the application told a different story. After following every instruction, I discovered that the “online process” didn’t actually exist. The only option left was to queue offline, again.

These experiences highlight a deeper issue beyond technical glitches. Many public service applications are built to digitise procedures, not to serve citizens. Empathy and user experience are treated as secondary priorities in this case.

Premature digitalisation 

Digital transformation in public services is invariably branded as delivering a seamless process. The reality users experience often contradicts that promise. I gathered feedback from users of several public service apps:

[Screenshots: user reviews taken from the Google Play Store pages of BPJS, Andal by Taspen, and the National Digital Samsat app]

User reviews on Google Play Store for applications such as BPJS, Taspen, and the National Digital Samsat reveal a consistent pattern. Despite high star ratings, recent reviews continue to surface unresolved issues: failed authentication, unclear instructions, system errors, and a lack of responsive customer support. Even in early 2026, many of the same complaints were still being raised.

What makes this situation more problematic is the lack of choice. These applications are not optional. For many services, they have become the primary and the only gateway. When digital access fails, users are left without clear alternatives, trapped in a system that offers neither guidance nor accountability.

This approach ignores the diversity of users that public service apps must serve. Platforms like BPJS and Samsat cater to citizens ranging from young adults to elderly citizens, while Taspen primarily serves users above 60 years old or retirees. Designing a single experience without adjusting to different levels of users only creates exclusion. As seen in cases like elderly users struggling with basic authentication flows, the result is not empowerment, but frustration.

The intention behind digitising public services is valid. However, launching an app is not the finish line. Digitalisation requires continuous user education, clear instructions, regular improvements, and accessible human support. Without these, “going digital” becomes a one-time project rather than a long-term commitment.

What ultimately emerges is not a lack of technology, but a lack of empathy. Many public service applications are designed to satisfy bureaucratic workflows, while human–computer interaction is treated as a secondary priority.

Next step: Mitigation

Criticising premature digitalisation will not solve the situation. The most important thing to focus on is how these public service apps can accommodate users’ needs while remaining seamless and user-friendly.

  • First, empathy must be treated as the core design principle, not as a secondary concern. This means conducting user research across age groups, regions, and levels of digital literacy. Many Indonesian users are elderly citizens, and they require closer attention during research.
  • Second, digitalisation exists to cut through long bureaucratic processes. Make sure the app shortens administrative procedures and helps users avoid long queues at offline counters.
  • Third, public service applications need clear and transparent accountability. Step-by-step guidance, informative error messages, and a reachable customer service channel with dedicated agents are not luxuries; they are essential infrastructure. When the system fails, users should know exactly whom to contact.
  • Lastly, an app must be treated as a living product, not a static prototype. Continuous updates, usability testing, and ongoing iteration are necessary to maintain users’ trust.

Digital transformation succeeds not when all processes are moved online, but when a technology reduces anxiety, genuinely helps the lives of people, and builds a supporting ecosystem.  


PR for LLM search: How to earn citations without gaming algorithms

Search is no longer just about ranking links. AI systems now quote sources. If you appear in those answers, your brand gets visibility, trust, and most importantly, clicks. If you don’t, you disappear.

A March 2025 SEMrush study found Google’s “AI Overviews” surfaced in 13.14 per cent of all US queries, nearly double the share from January. Brands that appeared in these AI answers saw conversion rates 4.4x higher than traditional organic traffic.

But how do you earn visibility without resorting to shortcuts that could backfire?

What the data shows

SEMrush analysis highlights how fragmented the AI landscape is.

  • ChatGPT leans heavily on Wikipedia and Reddit, with tech sites like TechRadar and G2 also surfacing.
  • Google AI Mode cites productivity blogs and platforms like Zapier, Medium, and LinkedIn.

In finance, the split is just as stark: ChatGPT draws on Reddit and Wikipedia, while Google AI Mode prefers Bankrate, NerdWallet, and Investopedia.

The lesson: each AI engine has its own source bias. Founders can’t assume one article in a mainstream tech or business outlet guarantees coverage everywhere.

This fragmentation is why PR leaders need to think of AI visibility as a portfolio strategy. Just as financial advisors recommend diversification, content strategists should diversify their evidence assets across formats, publishers, and domains. The more touchpoints an LLM has to draw from, the more resilient your brand’s visibility becomes.

Actionable takeaways

  • Know the answer marketplace: AI search is the new SEO, but success depends on verifiable evidence and trusted sources, not keyword stuffing.
  • Invest in PR strategy, not just spend: Early-stage founders often push budgets into visibility at any cost. But AI systems reward authority and credibility, not press release blasts.
  • Build “evidence assets”: Think beyond brand storytelling. Publish FAQs, explainers, glossaries, and data-backed studies that answer canonical questions clearly. These assets are the ones LLMs like to cite.
  • Turn PR from awareness to performance: PR has long been seen as a tool for credibility and brand awareness, but AI search is changing that equation. When your coverage or evidence-rich content is cited in an AI-generated answer, it can drive measurable traffic and conversions. Not just impressions. In this sense, PR now plays directly into performance metrics like clicks, leads, and customer acquisition. The shift is clear: evidence and citations translate into action, not just awareness.
  • Understand AI question types: LLMs handle “how,” “what,” and “compare” questions differently. Audit how your industry is being represented, and design assets that map to those question patterns.
  • Treat visibility as a flywheel: Once you appear in AI answers, the effect compounds. More citations build more authority, which reinforces discoverability across engines. This is where strategic patience pays off.
  • Balance brand and community signals: SEMrush data shows that community-driven platforms like Reddit surface heavily in ChatGPT. Participating ethically in these communities, by providing expertise rather than self-promotion, can help seed organic visibility.

The playbook: PR for AI discoverability

A repeatable framework is emerging:

  • Discovery map: Build a query universe that covers your company, its category, competitors, and the key problem statements.
  • Authority stack: Anchor your narrative in authoritative explainers, expert quotes, and third-party validation.
  • Citable assets: Create the pages LLMs want to reference, such as fact sheets, FAQs, and original or proprietary data sets.
  • Structure for machines: Use schema.org markup, consistent entity naming, canonical URLs, and alt text. For example, add FAQ schema to common questions, keep your company name consistent across pages, and describe charts/images with meaningful alt text so machines can interpret them.
  • Distribution blend: Focus on earned media and credible third-party research citations. Avoid over-relying on sponsored or paid placements.
  • Refresh cadence: Update statistics, add new references, and log changes transparently. Recency signals matter for both crawlers and model trainers.
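The “structure for machines” step is the most mechanical part of this framework. As a rough sketch, FAQ schema is just JSON-LD embedded in the page; the question and answer below are placeholder content for illustration, not prescribed wording:

```python
import json

# A minimal FAQPage JSON-LD object of the kind described above.
# The question/answer text is a placeholder, not real site content.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is Share-of-Answer?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "The percentage of tracked queries in which a brand "
                        "appears in LLM-generated answers.",
            },
        }
    ],
}

# Embed the serialised JSON in the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(faq_schema, indent=2))
```

The same pattern extends to other schema.org types (Organization, Article, Dataset); the point is that every canonical question on the page also exists in a machine-readable form.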

Measurement: A new scorecard

You can’t manage what you can’t measure. Traditional SEO metrics miss the point. Instead, track:

  • Share-of-Answer (SoA): Per cent of queries where your brand appears in LLM responses.
  • Cross-engine coverage: Presence across ChatGPT, Google AI, Perplexity, and Gemini.
  • Citation diversity: Are you showing up via one placement or multiple?
  • Answer drift: How stable is your visibility week over week?
  • Evidence depth: How many of your assets provide original data or primary sourcing?
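Share-of-Answer is straightforward to compute once you fix a query set and collect the answer text each engine returns. A minimal sketch, assuming you have already gathered one answer per query (the function name and sample data here are illustrative, not a standard API):

```python
def share_of_answer(query_results, brand):
    """Per cent of queries whose LLM answer mentions the brand.

    query_results maps each query string to the answer text returned
    by one engine; collecting those answers is left to the caller.
    """
    if not query_results:
        return 0.0
    hits = sum(brand.lower() in answer.lower()
               for answer in query_results.values())
    return 100.0 * hits / len(query_results)

# Hypothetical sample data: three queries, answers from one engine.
answers = {
    "best project tools": "Try Asana, Trello or AcmeBoard.",
    "top kanban apps": "Trello and Jira lead the category.",
    "simple task tracker": "AcmeBoard is a lightweight option.",
}
print(share_of_answer(answers, "AcmeBoard"))  # mentioned in two of three answers
```

Run the same query set weekly across each engine and the other scorecard items fall out of the same data: cross-engine coverage is which engines return a non-zero SoA, and answer drift is the week-over-week change in these numbers.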

Leaders who adopt this scorecard not only understand their brand’s presence but can benchmark competitors and adjust strategy accordingly. Imagine being able to quantify that your rival is cited in 60 per cent of “best AI tools” answers, while you only appear in 20 per cent. That’s actionable intelligence.

AI traffic is overtaking traditional search

SEMrush data shows that AI search traffic is rising rapidly and could soon rival or even surpass traditional organic search traffic. This trend is more than a technology shift. It’s a competitive warning. If you don’t begin optimising for LLM visibility now, competitors could establish themselves in AI results and capture the lion’s share of exposure and visits. While the foundations of LLM optimisation overlap with SEO, the two are not identical. The first step is understanding your brand’s visibility within AI-driven results and treating it as a distinct channel.

Ethics: Guardrails that matter

The temptation to game LLMs is real: prompt injection, synthetic citations, manipulated community forums. But the risks are higher than in SEO. A single flagged manipulation can result in removal or, worse, reputational damage.

UNESCO’s guidelines on AI ethics stress building trust and accountability. For PR, that means:

  • Disclosing conflicts of interest.
  • Auditing assets for bias.
  • Avoiding misleading statistics or unverifiable claims.
  • Differentiating fact from opinion clearly, especially when quoted out of context.

Ethical visibility lasts longer. Tricks don’t.

Checklist: Before you publish

  • Does this asset answer a clear question in plain language?
  • Is it backed by verifiable, citable data?
  • Is it structured for both humans and machines?
  • Would I be comfortable if this were quoted, without context, in an AI answer?
  • Does it align with the principles of transparency and accountability?

If the answer to all five is yes, you’re building for the right kind of visibility.

From a PR perspective, the same checklist applies to press materials and media kits too. Ensure that press releases cite reliable data, founder quotes are attributable and accurate, and fact sheets present details in a clear, structured way. These assets often become the raw material that journalists and AI systems alike draw from.

Closing thought

AI search is shifting PR from link placement to evidence placement. The brands that win won’t be those who find loopholes. They’ll be the ones that publish reference-grade content, earn citations in trusted outlets, and build credibility that machines and people recognise.

The opportunity is clear: treat AI visibility as a long-term reputational asset, not a quick growth hack. Just as SEO rewarded brands that invested in quality over gimmicks, LLM-driven search will favour those who combine ethics, structure, and consistency. For entrepreneurs and leaders, the play is simple: earn your citations.


The trust problem behind AI adoption and platform growth

Across industries, organisations are racing to adopt new technologies, particularly AI. But as adoption accelerates, a gap is becoming increasingly hard to ignore.

According to PwC’s 2025 Digital Trust Insights, 66 per cent of technology leaders now say cyber risk is their top concern. Yet only two per cent of organisations have achieved true, enterprise-wide cyber resilience.

This disconnect reveals a deeper issue. Cybersecurity is still treated as IT hygiene or operational insurance, rather than what it has become: economic infrastructure. Trust is the invisible layer that determines whether AI, digital commerce, and platforms can scale sustainably or stall under their own risk.

When AI adoption moves faster than governance

AI has unlocked enormous value, but it has also expanded attack surfaces faster than most organisations can respond.

The same PwC survey found that 67 per cent of organisations believe generative AI has increased their cyber attack surface. Inside companies, this shows up in familiar ways: employees experimenting with AI tools outside approved systems, browser-based agents automating tasks, and informal workflows built on powerful but poorly governed technology.

Innovation rarely waits for governance. But when guardrails lag too far behind, trust erodes quietly.

A clear example can be seen in the growing risks around AI prompt injection. OpenAI has acknowledged that prompt injection is a long-term security challenge that may never be fully solved. These attacks can manipulate AI systems into unintended actions, misinterpret user intent, or expose sensitive information — often without users ever seeing what went wrong.

The consequence is subtle but significant. Users may not understand the technical failure, but they experience the fallout. Confidence weakens. Adoption slows. Trust becomes fragile.

Platform-level trust requires structural security decisions

At scale, trust cannot be sustained through messaging alone. It requires architecture, governance, and oversight.

As digital platforms grow larger and more influential, cybersecurity is increasingly treated as a public trust issue rather than a private technical concern. Few examples illustrate this shift more clearly than TikTok’s US restructuring.


In January 2026, TikTok signed an agreement to divest 45 per cent of its US operations to a consortium of American investors, including Oracle, Silver Lake, and MGX. Under the new structure, Oracle will serve as TikTok’s trusted security partner, responsible for securing and managing US user data, auditing national security compliance, and replicating a US-specific version of the platform’s algorithm under new jurisdiction.

This move is not just about regulation. It reflects a broader reality: data residency, infrastructure control, and third-party oversight are now prerequisites for trust, not optional safeguards. For platforms handling massive volumes of personal data, cybersecurity decisions increasingly shape whether users, regulators, and partners remain willing to engage.

Security is becoming a user-facing trust signal

Cybersecurity is no longer invisible to users, whether platforms want it to be or not.

Recent Cybernews research, as cited in The Guardian, uncovered around 16 billion exposed login credentials circulating through infostealer malware datasets, prompting widespread warnings to reset passwords and strengthen authentication practices. At the same time, credential theft surged by 160 per cent in 2025, now accounting for one in five data breaches, driven by AI-powered phishing and Malware-as-a-Service tools.

These numbers matter because they translate into everyday experience. Compromised accounts lead to forced password resets, suspicious login alerts, and locked services. When trust breaks, users rarely make noise. They disengage quietly and permanently.
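The forced password resets mentioned above depend on checking credentials against breach datasets without exposing the password itself. As a rough illustration, the sketch below shows the k-anonymity scheme popularised by Have I Been Pwned's Pwned Passwords range API: the client sends only the first five characters of the password's SHA-1 hash and compares the returned suffixes locally. The mock response dictionary and its count are stand-ins for the real HTTP call.

```python
import hashlib

def split_for_range_query(password: str) -> tuple[str, str]:
    """k-anonymity split: only the 5-character hash prefix ever
    leaves the client; the 35-character suffix stays local."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_breached(password: str, range_response: dict[str, int]) -> bool:
    """range_response maps hash suffixes (returned for the submitted
    prefix) to breach counts; a mock stands in for the HTTP call here."""
    _, suffix = split_for_range_query(password)
    return suffix in range_response

prefix, suffix = split_for_range_query("password")
print(prefix)  # 5BAA6
# In the real API, the mock below would come from
# GET https://api.pwnedpasswords.com/range/5BAA6
mock_response = {suffix: 1_000_000}  # count is illustrative
print(is_breached("password", mock_response))  # True
```

The design choice is the point: the service never learns which password was checked, so the safety mechanism itself does not become a new source of exposure.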

This is why security measures increasingly double as reputation management.

Meta’s global anti-scam campaign offers a clear illustration. In 2023, consumers reported losing more than US$10 billion to fraud, a 14 per cent increase year-on-year. Forty per cent of reported social media scams involved online shopping, often leaving victims without the products they paid for.

In response, Meta dismantled over two million scam-related accounts globally. These actions are not just enforcement measures. They are visible trust signals, designed to show users that protection is happening in real time, not buried in policy documents.

Trust drives commerce, especially in emerging digital markets

In digital commerce, trust is not a compliance cost. It is a growth multiplier.

Nowhere is this clearer than in Southeast Asia. According to Lazada and Cube’s research, nearly 90 per cent of online shoppers in the region are active in curated, high-trust Mall environments, and 90 per cent are willing to pay more when buying from these spaces. Notably, eight per cent of respondents are willing to pay over 30 per cent extra for what they perceive as a trust premium.


These findings reinforce a critical point. Payments, identity verification, live commerce, and cross-border transactions all rely on cybersecurity as a foundation. When platforms feel safe, commerce flows. When they do not, growth stalls.

Cybersecurity is economic infrastructure, not insurance

Taken together, the pattern is clear.

AI is increasing exposure. Platforms are restructuring around security. Consumers are withdrawing trust when risks feel unmanaged. Commerce is rewarding safer ecosystems.

Over the past year, I have personally received multiple notifications informing me that my passwords were exposed in data breaches. Some platforms forced immediate resets. Others quietly suggested updates “as a precaution”. None of these moments felt dramatic on their own. But collectively, they changed how I interact with digital services.

I hesitate before connecting to new apps. I am more selective about where I store payment details. I think twice before adopting new tools, even when they promise speed or convenience.

This is what cybersecurity looks like when it becomes economic infrastructure. It does more than prevent worst-case scenarios. It determines who gets to participate confidently in the digital economy and who opts out.

Security, in this context, is no longer insurance against rare disasters. It is the foundation that allows digital systems to function at scale.

Trust is what allows innovation to scale

Innovation moves fast. Trust determines how far it goes.

Security is often framed as the opposite of speed. In reality, it is what makes speed sustainable. When users trust platforms, they experiment more. When businesses trust infrastructure, they invest deeper. When ecosystems trust their safeguards, innovation compounds instead of stalling.

The next phase of the digital economy will not be won by those who ship the fastest features or adopt the most advanced AI. It will be shaped by those who treat cybersecurity as a trust layer rather than a technical afterthought.

For founders, this means building security into product decisions early.

For platforms, it means making protection visible and meaningful.

For policymakers, it means recognising cybersecurity as critical economic infrastructure.

Because in a digital economy built on speed, trust is what allows progress to last.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.


Header image credit: Canva

The post The trust problem behind AI adoption and platform growth appeared first on e27.