In the dynamic landscape of Southeast Asia, where technological innovation intertwines with diverse cultures and economies, the rapid ascent of Generative AI sparks a crucial dialogue among 20 leading experts. While everyone, from industry stalwarts to visionary entrepreneurs, is bullish about Generative AI’s transformative potential, they are equally concerned about its misuse and the ethical and privacy risks it poses.
Peng T Ong of Monk’s Hill Ventures cautions against premature integration into mission-critical applications, emphasising the imperative of understanding what the “talking dog” of AI truly “thinks.” Across the spectrum, concerns voiced by Steve Brotman of Alpha Partners echo a call for thoughtful regulation driven by ethical considerations and a need to prevent potential misuse.
This feature encapsulates the nuanced insights of Southeast Asia’s AI luminaries, shedding light on the region’s stance towards shaping the responsible future of Generative AI amidst an evolving technological landscape.
Peng T Ong, Managing Partner, Monk’s Hill Ventures
The emergence of Generative AI has broken through a significant barrier in creating valuable and diverse content, and it does so at a relatively affordable cost. While it is currently unsuitable for many mission-critical applications, the prospect of getting there may be closer than we think.
I look at it this way––the ‘dog’ is ‘talking’. But just because the dog is talking doesn’t mean we should put it behind the wheel of a truck or at the trading desk of a billion-dollar hedge fund. We don’t know what the dog is ‘thinking’. A straitjacket must be put around the AI networks before we let them touch anything near mission-critical.
My concern is that the folks behind this AI boom aren’t thinking sufficiently or thoughtfully about fundamental engineering requirements before connecting these systems to the physical world.
One possibility is for straitjackets to be implemented through computationally tractable algorithms — software for which we can deterministically predict behaviour. Rule-based expert systems will return to vogue, or perhaps using knowledge graphs (data representing knowledge) will become more pervasive.
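To make the idea concrete, here is a minimal sketch of what such a computationally tractable "straitjacket" might look like: a set of pure, rule-based predicates that deterministically vet an AI-proposed action before it reaches anything mission-critical. All names, rules, and limits below are hypothetical illustrations, not from the article.

```python
# Hypothetical sketch: a deterministic, rule-based guardrail that vets an
# AI-generated action before it touches anything mission-critical.
# Rule names and limits are illustrative only.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str      # e.g. "trade", "report"
    amount: float  # magnitude of the action

# Each rule is a pure predicate, so the guardrail's behaviour is
# fully predictable for any input, unlike the model that proposed it.
RULES = [
    ("known_action", lambda a: a.kind in {"trade", "report"}),
    ("within_limit", lambda a: a.amount <= 10_000),
]

def vet(action: ProposedAction) -> tuple[bool, list[str]]:
    """Return (allowed, list of violated rule names)."""
    violations = [name for name, ok in RULES if not ok(action)]
    return (not violations, violations)

# An oversized trade is rejected deterministically.
allowed, why = vet(ProposedAction(kind="trade", amount=50_000))
```

The point of the sketch is the design choice Ong describes: the rules, not the network, have the final say, and their behaviour can be verified exhaustively.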
Steve Brotman, Founder and Managing Partner, Alpha Partners
There is a need for thoughtful regulation in this space. The primary reasons for this stem from the potential risks and ethical considerations associated with deploying robust AI systems.
Generative AI can create content and make decisions with a level of sophistication rapidly approaching human-like capabilities. While this presents incredible opportunities for innovation and efficiency, it also raises significant concerns regarding misinformation, privacy, intellectual property rights, and the potential for misuse.
Regulation in this context is not about stifling innovation or curtailing the development of AI technologies. Instead, it’s about ensuring their development and deployment are done responsibly and ethically. Effective regulation can help establish guidelines for the safe use of AI, protect individuals’ rights, and prevent harm that may arise from these technologies’ misuse or unintended consequences.
Rudi Hidayat, Founder and CEO, V2 Indonesia and CIX Summit 2024
Implementing regulations could provide a necessary framework to address various concerns surrounding Generative AI, including ethical use, accountability, and potential misuse. Policymakers aim to establish guidelines that ensure these tools positively contribute to society without causing unintentional harm. It’s essential to strike the right balance in regulation to encourage the responsible development and application of Generative AI. The focus should be on mitigating specific risks rather than imposing overly restrictive measures that impede progress.
It’s crucial to have a platform that brings together professionals, business leaders and AI practitioners. This space would enable education and facilitate conversations, especially on Generative AI and its regulations, to contribute to the broader discourse on the role of AI in shaping the future.
Bharat Buxani, Senior VP – Marketing, 99 Group Indonesia (Rumah123.com and 99.co)
The question of whether Generative AI should be regulated is multifaceted. We acknowledge the pivotal role Generative AI plays in shaping the digital realm, especially within the real estate sector. While recognising the importance of potential regulations, our focus remains on using AI to elevate consumer experiences and address their evolving needs.
The responsible utilisation of Generative AI can catalyse innovation, providing personalised, efficient, and groundbreaking solutions for our users. In navigating this landscape, it is essential to strike a delicate balance, promoting innovation while proactively addressing ethical considerations. By doing so, we aim to maximise the positive impact of Generative AI within the real estate industry and beyond.
Minhyun Kim, CEO of AI Network
Generative AI should be regulated to address the growing ethical concerns, privacy issues, and misinformation risks. Without some form of regulation, we run the risk of generative AI developing in a manner that contradicts the best interests of society. This includes clickworker exploitation, copyright infringement, and training data manipulation. There is also the ever-present threat of sudden mass unemployment. Regulation would help smooth the transition and give societies time to adapt.
The challenge, of course, is how to regulate it. Traditional regulatory processes may be too slow to keep pace with AI’s rapid advancements. Alternative governance mechanisms, such as decentralised autonomous organisations (DAOs), have potential because they are designed to govern open-source software with better agility and responsiveness. They can also bring together more diverse stakeholders with acute knowledge of the technology and its potential impacts.
Hayk Hakobyan, CEO and Co-Founder of Bizbaz
It is best to regulate generative AI to mitigate cases of potential misuse. There are four key areas for regulation:
- Create/promote harmful content, such as deepfakes or other forms of misinformation. Several examples involving celebrities already exist today.
- Create/promote discriminatory or biased content. For example, a generative AI model that is trained on a dataset of biased news articles could generate text that is also biased. This could hurt people who are marginalized or discriminated against.
- Create/promote privacy violations. For example, a generative AI model could be used to create fake images or videos of people without their consent. It could also inadvertently share or use private or confidential information of individuals.
- Initiate complex cyber-attacks, for example, by generating realistic phishing emails or creating fake identities for malicious purposes. Regulation could address these security risks and define standards for deploying generative AI systems.
To address this, generative AI models should be tested for bias and discrimination, built to respect people’s privacy, labelled so that people know they are interacting with an AI system rather than a human, and covered by an established certification scheme.
Dusan Stojanovic, Founder of True Global Ventures
The AI industry needs a more open playing field. Regulation is required as soon as possible and is far more critical than regulating the blockchain industry. AI will have a much more crucial impact on our future than blockchain.
Khos-Erdene Baatarkhuu, Group CEO of AND Global
Generative AI, exemplified by the remarkable rise of ChatGPT, offers a world of promise and complexity. The need for regulation emerges from the duality of AI, where immense potential meets significant challenges.
A comprehensive approach includes establishing AI governance bodies, nurturing public-private collaborations, and investing in AI literacy. Dealing with the technical and ethical aspects requires the collective wisdom of stakeholders from various backgrounds.
Tal Barmeir, Co-Founder and CEO, Blinq.IO
I firmly believe in the necessity of regulating generative AI, with a primary focus on transparency.
As a CEO, I understand the ethical obligations associated with AI deployment. Regulations are a cornerstone for establishing ethical standards, ensuring accountability, and fostering public trust.
Looking at financial services, for example, where AI is integral to risk assessment and decision-making, regulations are crucial. Individuals deserve to know when AI algorithms influence economic outcomes, promoting fairness and ethical use.
Moreover, in legal and judicial systems, ensuring transparency is paramount. Regulations can set protocols for disclosing AI involvement in evidence or documents, preserving the integrity of legal proceedings and bolstering trust in the justice system.
Finally, regulations can mandate clear disclosure when AI is part of content creation in news and media, where misinformation is a growing concern. This is essential to combat the spread of misinformation and enable the public to distinguish between human-created and AI-generated content.
Kenneth Tan, Co-Founder and CEO of BeLive Technology
I argue against the popular notion that regulation stifles innovation.
For one, there are massively beneficial second-order effects of enforcing safeguards: trust is fostered, and organisations are more likely to accelerate the adoption of AI tools.
This, in turn, encourages more innovation as the ecosystem builds itself around early success stories. Leaders emerge, entrench themselves, and in some cases, establish monopolies. You would then witness the “disruptors” who break the spell of dominance and democratise the use of that technology — with consumers being the ultimate beneficiaries. This simply cannot happen without the trust of organisations and a regulatory body to turn to.
However, regulation cannot be mandated with a broad-stroke, shotgun approach. We should systemically discuss specific AI technologies, understand what problems they are causing, and, most importantly, who these problems affect.
Alvin Toh, CMO of Straits Interactive
On a global level, regulation is already underway. China’s regulations, for instance, address AI-related risks and introduce compliance obligations on entities involved in AI-related business. Three specific laws to regulate AI exist in China — the Algorithm Recommendation Regulation, Deep Synthesis Regulation, and the Generative AI Regulation.
Recently, representatives from 30 countries gathered to discuss the significant opportunities and risks posed by AI as signatories of the Bletchley Park Declaration. Emphasising the need for AI development to prioritise human-centric, trustworthy, and responsible approaches, the declaration acknowledges the transformative potential of AI across various sectors but also highlights the associated risks, particularly in domains like cybersecurity and biotechnology. The declaration stresses the urgency of addressing these risks, especially concerning powerful and potentially harmful AI models at the technology frontier.
The signatories commit to international cooperation, inclusive dialogue, and collaborative research to ensure AI’s safe and responsible deployment, recognising the importance of engagement from diverse stakeholders, including governments, companies, civil society, and academia. The declaration also outlines a comprehensive agenda focusing on risk identification, policy development, transparency, evaluation metrics, and establishing a global scientific research network on frontier AI safety, and the nations pledge to reconvene in 2024 to advance these objectives further.
Thus, it is imperative for firms intending to deploy Generative AI in their products to have a good AI governance structure in place in anticipation of AI regulations currently being considered by governments worldwide. You don’t want to invest huge dollars and be on the wrong side of the international standard guidelines and regulations for this rapidly developing technology.
Mauro Sauco, CTO and Co-Founder, Transparently.AI
On the ethical side, there are biases and discrimination in content generation that we want to make sure we avoid, so regulation could help us prevent content that hurts people based on race, gender, or religion.
There are also issues with copyright. We know this is one big problem in generative AI: who owns the material that comes out, given that it is a derivative of all the material that went in? These are just some of the considerations.
We can prevent many of these issues by putting some regulations around this. Policing the output is one thing, but having these regulations will also impact the machine-learning process of how people are training their AI models to make the work more nuanced. Regulation, in this sense, addresses content generation from a systemic standpoint.
We are for regulations that do not stop or hurt innovation. We are for flexible regulations that can pivot depending on the circumstance.
Apichai Sakulsureeyadej, Founder and CEO, Radiant1
Generative AI needs to be regulated by ethical standards. We need to adhere to the guidelines of the Personal Data Protection Act. It is a good start while combining it with a more comprehensive approach.
Sourabh Chatterjee, Group CTO, Oona Insurance
Generative AI is the world’s fastest-developing technology and offers unparalleled opportunity. Given the nature of the beast, Gen AI will inevitably be regulated, with many policymakers seeking to introduce AI-specific legislation. However, the approach must remain pragmatic as this technology continues to evolve from where it is today.
Ultimately, striking a delicate balance involves establishing fundamental guardrails without compromising openness, innovation, and the myriad benefits of Gen AI.
Sanjay Uppal, Founder and CEO, finbots.ai
Generative AI’s meteoric rise and its potential impact on our way of living calls for a governance compass that combines Responsible AI practices by developers of AI and regulatory measures that would set a prudent path forward. Transparency isn’t optional; it’s the cornerstone of AI reliance, demanding globally endorsed standards illuminating AI’s inner workings.
However, AI is in its infancy and evolving rapidly. Therein lies the challenge. Just as you would put up guardrails for a child’s safe growth while allowing space for self-expression, you equally want to leave room for innovation. The development of AI is no different.
As regulators and governments seek to contain the negative fallout of reckless use of AI, the challenge will remain in identifying how tight or lax these regulations should be.
Lim Sun Sun, Professor of Communication & Technology at SMU
From the perspective of commercial enterprises, from bootstrapped startups to Fortune 500 companies, generative AI offers abundant, powerful tools for streamlining and expediting business processes. Since businesses are always motivated to improve their bottom lines, the efficiencies of digitalisation and AI adoption have tremendous appeal.
However, not all companies will pay equal attention to the risks of leveraging AI, such as algorithmic biases, privacy protections or faulty automated decisions. Regulatory oversight is thus vital for ensuring that businesses adopt AI with the necessary safeguards to ensure ethical and responsible use.
Jerrold Soh, Assistant Professor of Law at SMU
Regulation is typically needed when consumers lack the ability and producers lack incentives to take precautions against a product’s risks. Like mass-produced food, today’s Generative AI systems are made through complex processes involving multiple corporate stakeholders, data inputs, and computational techniques that consumers have little visibility into. Producers have incentives to overstate their products’ nutritional benefits while avoiding questions about ingredient quality.
The speed and scale at which such systems are being developed, deployed, and used – including to generate harmful content like pornographic fakes, commercial rip-offs, and political lies – suggest that if left unchecked, generative AI could poison society’s vital information streams.
Max Del Vita, CPO, MoneySmart Group
Generative AI is still relatively nascent and evolving, making the regulation question nuanced.
On one hand, excessive regulation at this early stage could stifle innovation and slow down the pace of discovery and development. On the other, the technology poses risks, especially regarding the potential for misuse and impersonation.
A middle-ground approach is prudent. Instead of heavy-handed regulation, establishing guiding ethical principles can be the initial framework to ensure generative AI’s responsible use and development. Technologies like blockchain can also play a complementary role by enhancing the trustworthiness of AI-generated content. By providing a tamper-proof, decentralised data record, blockchain can verify and trace the source of AI-generated material, adding a layer of security and reliability that contributes to responsible use and mitigates potential risks.
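As a rough illustration of the provenance idea Del Vita describes (not MoneySmart’s implementation; all names here are hypothetical), a tamper-evident record can be as simple as a hash chain, where each entry commits to the previous one, so any later edit breaks verification:

```python
# Illustrative sketch: a hash chain giving AI-generated content a
# tamper-evident provenance record. Field names are hypothetical.
import hashlib
import json

def record(chain, content, source):
    """Append an entry whose hash commits to the previous entry."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"content": content, "source": source, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return chain

def verify(chain):
    """Recompute each hash; any tampering breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {k: entry[k] for k in ("content", "source", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = record([], "AI-written summary", "model-x")
chain = record(chain, "AI-written caption", "model-x")
assert verify(chain)
chain[0]["content"] = "edited after the fact"  # tamper with history
assert not verify(chain)
```

A real deployment would distribute the chain across independent nodes so no single party can rewrite it, which is the property blockchain adds over a local log like this one.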
Francis Lui, CEO of NexMind
As a founder in the AI space, I believe it is not necessary to regulate Generative AI at this stage; it is too complex to regulate entirely, and regulations this early will likely hinder innovation and growth in this space.
Given the current pace, AI technology will likely change rapidly, making it challenging to create adaptable rules.
Ensuring that Generative AI is developed and used responsibly and ethically is essential. Within the development process, self-regulatory measures can go a long way towards ensuring the entire onus of comprehension doesn’t fall on governmental regulators and thereby inhibit growth.
Simon Quirk, Co-Founder Gracenote.ai
AI will have many positive impacts that will benefit businesses, governments and humanity, especially with the right guiding principles. But equally, when used or allowed to act in ways that are not for the greater good, the potential for adverse consequences is great. Therefore, like anything that can be used for good or bad, implementing rules is key to making sure AI usage aligns with societal norms.
Marianne Winslett, Venture Partner at R3i Capital
It’s not too early to ensure that AI software development follows the best practices of software development anywhere, including proper collection and documentation of training data and ample testing using both standard and adversarial scenarios. Along with respect for privacy and data ownership, these practices are also in developers’ best interests if they wish to minimise future litigation. Beyond these points, it’s a bit early for regulations specific to generative AI, which is still in its infancy.
Nelaka Haturusinha, Director at Striders Global Investment
Regulating generative AI is essential to harness its transformative power responsibly, safeguarding against unintended consequences and ensuring ethical boundaries guide innovation.
Michelle Duval, CEO and Founder of Fingerprint for Success
Generative AI groups must unify and work together to be a force for good. The only way this can be achieved is through active collaboration and governance by the world’s most influential people who support both optimistic and pessimistic views on the significant impact AI can and will have on humanity. Through this group, we must work towards regulation.
Kelly Forbes, Exec Director at AI Asia Pacific Institute
In preparation for regulation, governance mechanisms need to be in place. Sectors such as education and healthcare are priorities, as these are areas where the risks could potentially outweigh the benefits.
Warren Leow, Group CEO at Inmagine
Generative AI should be self-regulated, with responsible platforms playing a role to ensure creators can be paid or empowered accordingly. It would be difficult for governments to regulate across geographies.
At Inmagine, we believe generative AI is a game changer that would upend many business models. Ultimately, the dust will settle based on what the consumers want.
Hence, our responsibility lies in balancing user expectations and ensuring we empower our community and contributors to earn amidst a changing market.
Gullnaz Baig, Executive Director, Angsana Council
Multidisciplinary collaboration between those with technical prowess and those who understand society is required to build AI that is safe and equitable by design. While this need should be obvious, we should not expect it to come naturally without considerable pressure.
Given the current race to advance in AI development, a multidisciplinary approach, which could slow down the process, is considered cumbersome. Product development sprints do not lend themselves well to the postulations of policy teams. This is as true for the big tech companies racing to get ahead with their own Foundational Models as it is for startups integrating AI into their offerings.
So, we are either left with relying on tech leaders to do the right thing, if they can figure it out, or on states to develop punitive regulations to keep AI development in check.
Yet, regulations, even those as robust as the EU AI Act, are only useful as accountability frameworks. While they are the state’s most vital tool to wield against tech companies, they are also weak. They are often reactive and may struggle to keep pace with the rapid advancements in AI technology. In some cases, regulation kicks in only when the harm has been done.
There is a third option. It enables the state to engage with technologists at a more meaningful level. AI can be developed to check other AIs, ensuring the ecosystem is safe overall. An example is DetectGPT, an AI that helps verify whether a text is AI-generated. States should view AI development as an ecosystem. Even as they develop regulations to check risks and harms, they should incentivise the development of AI for safety. National AI strategies should include specific provisions to co-invest in safe AI technology development, seed funding for research into AIs that check for discrimination, violation of IP rights, and the like, and even visa and tax incentives for companies that concentrate on building AI for safety, ensuring that, on balance, the ecosystem is a safe one for all.
The post Experts advocate thoughtful regulation for the rapid rise of Generative AI appeared first on e27.