
‘AI is a race for innovation; regulation will only develop effectively once winners are announced’

Sanjay Sehgal

Sanjay Sehgal is a seasoned entrepreneur, venture and angel investor, and philanthropist who currently serves as the Chairman and CEO at MSys Technologies.

As countries like India contemplate regulatory reforms for AI, Sehgal offers insights into the evolving AI landscape and its ethical considerations, and shares his perspective on the key principles that can strike a delicate balance between safeguarding end-users and fostering continued technological progress.

Edited excerpts from the interview:

As countries like India consider drafting regulatory reforms for AI, what key principles do you believe should be prioritised to ensure a balance between protecting end-users and fostering the progress of AI technologies?

India should adopt a light touch while framing these laws, because the recently launched Telecommunications Bill already covers stringent steps towards user privacy and protection. The country can also appoint dedicated advisory bodies to fight the problems of deepfakes and AI-manipulated content.


However, creating very strict laws, like those in the EU, can seriously restrict the scope of AI in India. The focus should also be on the big picture, where the government can work closely with tech companies to upskill the nation to use AI as a tool rather than demonise it for its pitfalls.

The recent lawsuit between The New York Times and OpenAI brought attention to the potential use of original content to train generative AI models. What ethical considerations should AI companies consider when developing and training their models, particularly concerning using copyrighted material?

Data is the atom from which the proverbial body of AI is built. So when Generative AI uses large amounts of data to train its models to deliver accurate responses within seconds, we must also recognise that gathering such a quantum of data ethically is a challenge.

More such cases will come to light as ethics plays catch-up with the leaps of innovation, a common occurrence whenever a new technology is implemented. We learn only from our mistakes.

The solution should come from the creators themselves: dedicated teams should be set up to map the sources of data collection and the intervention required to obtain that data ethically, supported by experts who can advise on ethical means of sourcing it. Ultimately, only AI companies can resolve this, by holding themselves to the highest standards.

In the context of generative AI, who should be responsible for ensuring that AI models are trained on data that adheres to ethical standards? Should this responsibility lie with the developers, regulatory bodies, or a combination of both?

AI developers and the experts running these companies should create standard practices to source data ethically and maintain transparency about it. Regulatory bodies will attempt to create laws that protect both users and data, but the nuances of AI are so dynamic and vast that more comprehensive laws may only be formed in the aftermath of an incident. At best, the bills can emphasise transparency to avoid such circumstances.

Privacy concerns have been raised, especially in generative models. How can regulations effectively address these concerns while allowing AI technologies to innovate and develop?

Each nation is racing to create its own version of bills to curtail the privacy concerns of AI.

The EU AI Act has taken a risk-based approach that allows a certain level of flexibility for Generative AI companies that fall into the low-risk category.

The UK has a pro-innovation approach that sets out only a very broad framework around AI.

China, by contrast, one of the first countries to enact relevant legislation, has very specific rules around Generative AI.


In a nutshell, while the experts may mull over an ideal scenario to address concerns over AI technologies, globally, innovation will not be stifled by any regulatory bodies. It is a race for innovation, and regulation will only develop effectively once the winners are announced.

Ownership of content generated by AI models is a complex issue. How should this be legally addressed to ensure fair compensation and acknowledgement for human or machine creators?

India's current Copyright Act of 1957 does not address AI-generated content or recognise AI as an author. A key factor in determining a work's protection is originality, which is the wrong yardstick for protecting the content creators and coders who help generate these responses. The current debate around the world is about the ethical implications of granting AI personhood to protect content rights, but the real argument should be about protecting not the tool but the creators of the tool.

Arguably, there may be some protection under the term ‘derivative works’, which qualify for protection if they introduce significant alteration to the original material, either with the copyright holder’s explicit permission or by using works in the public domain. Again, this is a very restrictive term for a technology with myriad use cases. A unique terminology and set of regulations is required to protect both the creators and the technology enabling such intuitive solutions.

With your VC and angel investor background, how do you assess the impact of legal battles between publishers and AI companies on the investment landscape for AI startups? What considerations should investors take into account?

Historically, every innovation has met with legal challenges and privacy-breach incidents that created awareness of its pitfalls only in hindsight.

As an investor, I recognise AI as a great innovation tool. Still, the VC industry outlook demands tangible growth, sound business models and, more than anything else, a promised fast track of the product’s potential to be acquired by tech giants.

The rest are minor obstacles that are usually resolved if the company’s potential remains unfettered.

As a philanthropist, how do you envision the responsible and ethical use of AI technologies contributing to societal well-being, and what initiatives or projects should be prioritised?

AI could be a distribution channel for reviving the age-old practices of heartfulness, or compassionate mindfulness. The art of well-being was conceptualised by our ancestors thousands of years ago, and it is now a forgotten practice.

The term ‘well-being’ is misused in current times as an elaborate scam to woo the masses into buying products that claim to improve their lifestyle. Since Generative AI is one of the biggest platforms for disseminating information to the masses, it should be used to create proper information channels about well-being and mental health care.


Also, the technology’s original promise was to perform mundane tasks and free us up for more philanthropic and self-development activities. Moreover, as a philanthropist, I would urge balancing the use of AI with real human intelligence in the Global South, where human resources will continue to be more available, and at lower cost, than AI.

Considering the global nature of AI development and deployment, do you see the need for international collaboration in establishing standards and regulations for generative AI, or do you think a more localised approach is appropriate?

International collaboration is a desired outcome but not a practical expectation in the current geopolitical scenario. Disparities in income and labour wages render this technology costly compared with hiring manpower. AI is also at a nascent stage, and it will take years of development and hyperlocalised solutions before it can replace manpower for mundane tasks while remaining cost-effective.

A unified outlook and implementation will eventually emerge as the technology evolves and our understanding of it deepens.

The post ‘AI is a race for innovation; regulation will only develop effectively once winners are announced’ appeared first on e27.