Regulating AI in Asia Pacific: Can companies keep up?

As artificial intelligence (AI) adoption accelerates, companies across the Asia Pacific (APAC) face increasing regulatory scrutiny.

A new report highlights that 77 per cent of businesses are already subject to AI-related regulations or expect to be within five years. Additionally, 90 per cent anticipate compliance obligations related to AI-adjacent laws, including cybersecurity, data protection, and consumer rights.

Governments across APAC are responding swiftly. Singapore, a leader in artificial intelligence governance, has introduced multiple initiatives, including AI Verify, the Model AI Governance Framework for Generative AI, and Safety Guidelines for Model and App Developers.

India has taken steps with the Digital Personal Data Protection Act 2023 and an AI regulatory sandbox. Australia is proposing mandatory guardrails for high-risk AI applications, while Saudi Arabia has rolled out AI ethics guidelines and generative AI regulations.

South Korea took a significant step in 2023 by introducing an AI liability law, setting a precedent for how businesses must manage AI risks.

Other countries, including China and Brazil, are also refining intellectual property and copyright laws to account for artificial intelligence-generated content.

Businesses struggle with AI risks despite revenue potential

While business leaders acknowledge the value of responsible AI, many remain unprepared to manage its risks effectively. Research suggests that companies pioneering responsible artificial intelligence could see an 18 per cent increase in AI-related revenue, yet most organisations lack comprehensive risk mitigation strategies.

One of the biggest challenges is underestimating the scale of artificial intelligence risks and the regulatory landscape’s complexity. Without robust compliance frameworks, companies risk falling behind as governments ramp up enforcement efforts.

“Organisations that fail to implement responsible artificial intelligence governance will struggle to scale AI innovation while meeting regulatory expectations,” the report warns.

The road to responsible AI leadership

To navigate the evolving AI landscape, businesses must adopt a proactive approach to responsible AI. Experts outline five key priorities for organisations aiming to mitigate risks while driving innovation:

  • Establish AI governance and principles: Develop clear policies, guidelines, and controls to ensure artificial intelligence is deployed ethically.
  • Conduct AI risk assessments: Systematically evaluate and categorise risks across artificial intelligence use cases.
  • Enable responsible AI testing: Integrate third-party tools and services for continuous risk assessment.
  • Implement ongoing monitoring and compliance: Build dedicated artificial intelligence compliance teams to oversee model performance and ethics.
  • Address workforce impact, privacy, and security: Ensure employees have the right skills to manage AI responsibly while safeguarding data and consumer rights.

APAC’s AI future: Balancing innovation and regulation

As AI regulations evolve, businesses must align their artificial intelligence strategies with compliance mandates to maintain a competitive edge. Industry pioneers already place responsible AI at the core of their digital transformation, treating risk mitigation as a strategic advantage rather than a regulatory burden.

By embracing responsible AI, APAC companies can turn regulatory pressure into business value, positioning themselves for sustainable growth in the AI-driven economy.

The post Regulating AI in Asia Pacific: Can companies keep up? appeared first on e27.
