Artificial Intelligence (AI) is not new. Neither is reputational risk. Corporations have been using AI for a while now, but most of that usage has been invisible: data analytics, predicting customer behaviour, sales and marketing, or operations. Most of the time, clients and customers never see AI's hand in a corporation's work.
For instance, a manufacturing company might use machine learning to collect and analyse an inhuman amount of data, identifying patterns and anomalies the company can act upon to improve operations. As a customer of that manufacturer, you will probably never see this AI at work.
That might change with ChatGPT. The language model answers questions and assists with tasks. Students might use it to write an essay, software engineers to code, travellers to plan an itinerary, and some people are already using it as a search engine. And companies are planning to jump on this bandwagon.
Forbes reported that Meta, Canva, and Shopify are using ChatGPT to answer customer questions. They also found that Ada, a Toronto-based company that automates 4.5 billion customer service interactions, partnered with ChatGPT to further enhance the technology.
Furthermore, CNBC reported that Microsoft is planning to release technology that lets big companies launch their own chatbots built on OpenAI's ChatGPT technology. That could put ChatGPT in front of billions of people.
It seems like a perfect partnership, a natural step in technology’s evolution.
A double-edged sword
But not everyone has jumped onto this tempting bandwagon. Some of the most AI-proficient organisations in the world are treading with caution, and for good reason.
As impressive as ChatGPT has proved thus far, large language models (LLMs) like the one behind it are still rife with well-known problems. They amplify social biases, often to the detriment of women and people of colour. They are riddled with loopholes: users found they could circumvent ChatGPT's safety guidelines, which are supposed to stop it from providing dangerous information, simply by asking it to imagine it is a bad AI. In other words, ChatGPT-like AI is fraught with reputational risk.
Harnessing technology with a healthy reputational risk mindset
But that doesn't mean we have to dismiss AI like ChatGPT altogether. Adopting new technology of any sort is bound to come with risks. So, how do we reap the benefits of AI whilst keeping reputational risk at a healthy level?
The Reputation, Crisis and Resilience (RCR) team at Deloitte held a roundtable with industry leaders in financial services, technology, and healthcare industries to discuss how they approach the complex challenge of managing reputation risk.
Some of the points concluded were:
- Foster a reputation-intelligent culture: One of the key points discussed was creating a culture that is sensitive to brand and reputation. In every decision, employees should have an internal compass that constantly asks: will this move the needle on the company's reputation, and how? This can be cultivated through holistic onboarding and training programmes.
- Set a reputation risk tolerance: Setting a tolerance helps organisations make intentional decisions. No company wants a reputational hit, yet few actually set tolerance levels for how much risk they are willing to take. When you have a threshold to stay within, it becomes easier to deal with new technologies you might not yet fully understand.
- Utilise reputation risk management: Measurement methods include regular surveys, media monitoring, and key opinion former research. However, leaders must balance collecting the relevant data against drowning in it. Research shows that excessive data collection can be counterproductive, distracting people from the bigger picture or fostering a risk-averse attitude.
Since AI will continue to develop very quickly, staying on top of its every intricacy will be difficult. While we should keep abreast of developments, what matters more is cultivating a strong mindset around reputational risk, so that whatever the tool, be it AI, social media, or cryptocurrency, we can always manage the reputational risk involved.
For instance, instead of concentrating all your effort on the dangers of a kitchen knife and how it might hurt you, learn the general guidelines of kitchen safety, whether the hazard is the sharp edge of a knife or a pan fire.
Similarly, instead of fixating on the latest technological marvel and learning about every single reputational risk that might come with it, build a robust reputational mindset, one that will carry your organisation through any risky business.
The post With AI comes huge reputational risks: How businesses can navigate the ChatGPT era appeared first on e27.