Singapore unveils world-first AI governance framework for Agentic AI at Davos

Singapore has launched a new Model AI Governance Framework for Agentic AI, positioning itself at the forefront of global efforts to guide the responsible deployment of advanced AI systems, including AI agents.

Announced on January 22 at the World Economic Forum, the framework was introduced by Minister for Digital Development and Information Josephine Teo. Developed by the Infocomm Media Development Authority (IMDA), it is the first in the world to provide a comprehensive, practical guide for organisations deploying agentic AI responsibly.

The framework builds on Singapore’s original Model AI Governance Framework, introduced in 2020, and reflects the country’s balanced approach to AI governance. It seeks to put guardrails in place to manage risks while leaving room for innovation, ensuring that the benefits of AI agents can be realised in a trusted and safe manner.

Unlike traditional or generative AI, AI agents can reason, plan across multiple steps and take actions on behalf of users to achieve specific objectives. These capabilities allow organisations to automate repetitive tasks in areas such as customer service and enterprise productivity, freeing up employees to focus on higher-value work and supporting broader sectoral transformation.

However, the increased autonomy of AI agents also introduces new risks. These systems may have access to sensitive data and the ability to make changes to their environment, such as updating databases or executing payments. This raises the risk of unauthorised or erroneous actions, as well as challenges around human accountability. One concern highlighted is automation bias, where users may over-trust AI agents that have performed reliably in the past.

To address these issues, the new framework emphasises that humans remain ultimately accountable for the actions of AI agents. It stresses the importance of maintaining meaningful human control and oversight throughout the deployment and use of agentic AI.

Targeted at organisations deploying AI agents either in-house or through third-party solutions, the framework offers a structured overview of key risks and emerging best practices. It provides guidance across four main dimensions:

- Assessing and bounding risks upfront by selecting appropriate use cases and limiting agent autonomy and access;
- Ensuring human accountability through clearly defined approval checkpoints;
- Implementing technical controls throughout the AI agent lifecycle, including baseline testing and controlled access to approved services; and
- Enabling end-user responsibility through transparency, education and training.

The framework was developed with input from both government agencies and private sector organisations. April Chin, co-chief executive officer of Resaro, said the framework fills a critical gap in policy guidance by addressing the specific risks associated with agentic AI. She noted that it helps organisations define agent boundaries, identify risks and implement mitigations such as agentic guardrails.

IMDA described the framework as a living document and said it welcomes feedback from interested parties, as well as case studies demonstrating responsible deployments of AI agents. Building on its existing starter kit for testing large language model-based applications, the authority is also developing additional guidelines focused on testing agentic AI applications.

The post Singapore unveils world-first AI governance framework for Agentic AI at Davos appeared first on e27.