
Most Singaporeans would have first encountered artificial intelligence (AI) through the likes of ChatGPT — OpenAI’s famous (or infamous) chatbot. Its ability to generate human-like responses with impeccable fluency has captured public imagination, sparking conversations and debates on how AI might transform the way we live and work.
Across Singapore and the broader Asia Pacific region, AI development has moved beyond novelty and into mainstream adoption. It is now an essential enabler of digital transformation, helping businesses scale efficiencies, address labour constraints, and unlock new avenues of value creation.
In Singapore, where digital readiness is among the highest in the region, AI is being embedded into national strategies — from precision medicine and port operations to smart city services. At the regional level, other markets are increasingly leveraging AI to optimise supply chains, improve the customer experience, and empower digital ecosystems across financial services, manufacturing, and retail.
Far from being an emerging technology, AI has its roots in decades of research. But its real-world impact has exploded in recent years, thanks to massive improvements in compute power and data availability.
In Singapore, this has led to expanded AI applications across government and industry. Initiatives like AI Singapore have spurred local talent development, while targeted collaborations with institutes of higher learning continue to advance responsible AI innovation.
AI is now deeply embedded in our everyday lives and the fabric of enterprise. It automates routine tasks, augments decision-making with real-time data insights, and enhances operational resilience. In Singapore, where many sectors are already embracing AI to overcome productivity plateaus, the technology’s ability to improve returns while reducing complexity is proving indispensable.
Generative AI (GenAI) is attracting significant interest. According to Lenovo’s CIO Playbook 2025, titled It’s Time for AI-nomics, Asia Pacific organisations expect a 3.6x ROI on average from AI initiatives, with many focusing on ITOps, software development, and cybersecurity as key areas for implementation.
Contrary to concerns about job displacement, AI is also creating new employment pathways and enriching professional development. By eliminating repetitive tasks, it empowers employees to focus on innovation and strategic problem-solving. In Singapore, workforce reskilling is a national imperative, and AI adoption supports it by enabling continuous upskilling and opening higher-value opportunities for professionals.
With these benefits, it’s no surprise that investment is pouring into AI infrastructure. In Asia Pacific, 65 per cent of enterprises now rely on on-premises or hybrid cloud infrastructure for AI workloads, especially in countries like Singapore where data sovereignty, latency, and compliance are critical concerns. To accelerate this momentum, Lenovo has invested US$100 million in its AI Innovators program, delivering more than 165 AI solutions and over 80 AI-optimised platforms.
But with greater adoption comes greater responsibility. As GenAI use grows, so do concerns about its ethical implications. Governments and businesses alike must ensure that AI systems are transparent, fair, and explainable — particularly as they are applied in sensitive contexts such as healthcare, law enforcement, or public services.
Singapore’s own approach to AI governance is widely recognised as a benchmark. Its Model AI Governance Framework, developed by the Infocomm Media Development Authority and Personal Data Protection Commission, exemplifies how regulation can foster innovation while managing risk.
Still, the responsibility doesn’t fall on regulators alone. Businesses must actively participate in shaping responsible AI practices. In fact, across Asia Pacific and Japan (APJ), only 25 per cent of organisations have fully enforced AI GRC (governance, risk, and compliance) policies — a gap that must be closed if trust and transparency are to keep pace with progress.
Bias, in particular, is a persistent challenge. Algorithms reflect the data and assumptions used to build them. If unchecked, they can reinforce historical inequities, with potentially harmful outcomes. Testing, retraining, and human oversight are crucial to mitigating such risks. As industry watchers and advocates for AI safety have noted, AI does not have the capability to govern itself; GRC must be embedded into the organisational fabric from day one.
To that end, regulations must evolve in step with the technology. That means ongoing collaboration between policymakers, academia, and industry — not only to refine rules, but also to anticipate emerging risks. In fast-paced digital economies like Singapore’s, agility in governance will be as important as agility in innovation.
Ultimately, building a trusted and resilient AI ecosystem will require a whole-of-society effort. From regulators to developers, enterprises to end-users — every stakeholder has a role to play in shaping an AI future that is inclusive, secure, and beneficial for all.
—
Image courtesy: Canva Pro
The post Building AI on a foundation of accountability appeared first on e27.
