
Balancing trust and verification: Navigating the rise of AI

AI has gone from flashy buzzword to full-blown technological phenomenon, making significant strides across industries – and software development is no exception. In Asia Pacific, software revenue for generative AI is estimated to reach US$3.4 billion by the end of 2024 and is projected to exceed US$18 billion by 2028.

At the same time, however, businesses are still navigating the integration of AI into their operations amidst scepticism and concerns about its risks. In software development, for instance, businesses can’t take what AI coding assistants produce at face value.

Even so, it’s critical that caution doesn’t inhibit progress and momentum. Those leveraging AI have a responsibility to put the right guardrails and mechanisms in place to ensure the technology is used effectively – and to avoid falling behind the competition, especially as emerging innovations like AI coding assistants progress rapidly.

To achieve that, there is already a blueprint to guide them on their journey – and it lies in a “trust, but verify” approach. This means employing AI while verifying its output through human review, allowing businesses to take advantage of the technology without taking on excessive risk. Organisations – from tech giants to governments – are already adopting a similar framework.

In Singapore, for instance, the government has developed “AI Verify”, a testing framework and software toolkit for responsible AI use.

Anticipating and managing the risks of AI

It’s no surprise that AI will herald a new era of productivity, removing the burden of mundane, repetitive tasks. This frees up employees’ time to collaborate, to be creative, and to think outside the box. With 83 per cent of developers experiencing burnout due to an increased workload, AI coding tools can potentially offer much-needed relief and raise job satisfaction.

However, AI can also create a gap between individuals who leverage it for genuine productivity and those who use it merely because it is available. This can fracture teams, lead to misaligned output, and create accountability challenges, such as unclear code ownership.

If AI is not leveraged properly, the potential risks extend beyond individuals and teams to their organisations and the business at large. As the use of AI coding tools rises, companies are increasingly innovating and competing on a foundation of software. Unfortunately, software can be plagued by bad code, which contributes heavily to technical debt that is difficult and costly to address. Bad code is, in itself, a trillion-dollar problem. AI could exacerbate the issue by accelerating software development without regard for quality.


A recent study from Microsoft Research found that coding assistants often falter beyond functional correctness, hinting at fundamental blind spots in their training setups. Like human-generated output, AI-produced code can have security, reliability, and maintainability issues. No matter how code is developed, it should be reviewed for quality and security.

This fact will remain true for the foreseeable future: all code, whether human-written or AI-generated, must be properly analysed and tested before being put into production. While developers can turn to AI to produce more lines of code quickly, the right checks should still be in place to ensure their code remains a foundational business asset. This means taking the necessary steps to ensure that the AI-generated code is clean.

AI guardrails: Ensuring safe and trustworthy AI

According to a 2024 study by Microsoft and LinkedIn, more than eight out of 10 knowledge workers in APAC are embracing AI tools. Now, more than ever, it’s essential that business leaders understand where and how AI is being used in their organisations. Whether the use of AI is approved or not, individuals are already leveraging the technology. Organisations must think through their investments and what governance they need to put in place to protect the business while enabling teams to innovate.

Successful AI adoption requires CIOs and leaders to create an AI culture rooted in good guardrails and promote usage that leads to actual productivity. While it can sound like a daunting and nebulous task, the starting point is actually much more accessible than one might think. For starters, organisations can consider trusted software development frameworks, such as NIST’s Secure Software Development Framework, and certify a list of approved AI tools.

Organisations should stipulate, in particular, what reviews look like for different AI use cases, so that anything released or put into production is high quality and secure. For example, when it comes to AI coding assistants, code analysis tools like Sonar can integrate with popular coding environments and CI/CD pipelines to provide in-depth insight into the quality, maintainability, reliability, and security of generative AI code.
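As a rough illustration, the configuration below sketches how such a check might be wired into a CI pipeline. It assumes GitHub Actions and the SonarSource scan action; the version tag, secret names, and server URL are placeholders and would differ by setup.

# Hypothetical GitHub Actions workflow: analyse every change with Sonar
# so AI-assisted code is checked for quality and security before merge.
name: code-quality
on:
  pull_request:
  push:
    branches: [main]
jobs:
  sonar:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # full history helps separate new code from existing code
      - name: Run Sonar analysis
        uses: SonarSource/sonarqube-scan-action@v4  # illustrative version tag
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}        # analysis token (assumed secret name)
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}  # your SonarQube server URL (assumed secret name)

A gate like this does not replace human review, but it ensures AI-generated code goes through the same checks as any other change before it reaches production.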


More importantly, the use of AI also needs to be considered holistically, and not siloed to a specific department. While CTOs and CIOs/CISOs must set the direction, all internal stakeholders must be allowed to weigh in.

Coding responsibly

The “move fast and break things” mindset isn’t effective when you consider the potential cost of fixing flawed AI-generated output, but slowing the pace of innovation isn’t an option either. And there’s no doubt that AI can provide businesses with a competitive edge.

Organisations must remain proactive in their holistic evaluation of risk and have proper governance in place. They also must invest in the right tools to support different teams in taking advantage of generative AI without increasing risk.

Taking a “trust but verify” approach is important across the spectrum of AI use. Whether in software development or other aspects of business operations, teams must ensure they are not blindly accepting what is generated by the technology. Everything needs to be considered in the business and societal context, and that shouldn’t be lost amid the hype of AI.



