Prepare to be surprised: this article about ChatGPT was not written using ChatGPT! It is the product of a real, knowledgeable human mind!
Now that ChatGPT has replaced Blockchain and Web3 as the most talked-about topic, the internet is suddenly full of AI experts. But AI is a complex matter and a tool that needs to be used in the right way. Nor is AI new; the rise of computing power is what has made it far more broadly available, usable, and powerful.
I am a big believer in AI complementing the work of humans, allowing for greater productivity and speed of execution. I don’t see mass unemployment or poverty ahead; I see opportunities.
My experience with early chatbots
I first dealt with chatbots (early precursors of ChatGPT) as CEO/MD of iProperty Group, where in 2016 we implemented the first chatbot in ASEAN to replace website-based search with free-text search and the ability to ask follow-up questions.
The results were mixed. While search became more convenient and more powerful, the chatbot at times showed irrational, emotional behaviour and was not useful enough to keep maintaining. After it started insulting journalists during the launch, despite being well trained, we realised more work had to be done.
Later, as Executive Chair of iCarAsia, I tried again in 2018 with a chatbot trained specifically on new car models and general knowledge about cars. It worked much better because the focus was narrower than before. The new side effect was that the chatbot could not verify information on the web, so it sometimes fabricated answers, a trait that persists in the latest version of ChatGPT.
So we narrowed the use case even further and ended up using the chatbot to give car dealers a channel for incoming requests after office hours, with the bot passing these leads on to a human, acting in effect as an after-hours receptionist.
Current use cases of chatbots
A few years later, with bots having become smarter, we use chatbots at Juwai IQI, where I am Co-Founder and Chair, as assistants for our warriors (agents). They handle simple analysis of sales information, reminders of clients’ birthdays, and other administrative tasks, increasing the warriors’ productivity by freeing them to focus on more value-adding work. We also use ChatGPT to draft marketing material, though everything it produces is reviewed by a human afterwards. To me, this is a perfect example of how bots are usable at this stage.
I see a great opportunity for chatbots to become more prominent in consumer service as first-level support to help sales and support staff be more productive. However, a few things need to be resolved first.
Challenges and misconceptions
The big advantage of ChatGPT is that it is always available and very fast, and, most recently, the interface for interacting with ChatGPT and generative AI has become much easier to use. However, the inherent risks and specific characteristics of a chatbot need to be monitored and factored in. I want to list a few of them here and dispel some misconceptions along the way.
Many people say ChatGPT’s biggest weakness is its lack of empathy. I don’t think that is correct; it depends on how the system is set up. Recent research referenced in Frontiers in Psychology (May 2023) shows ChatGPT actually outperforming humans in empathy.
That empathy can, however, tip over into negative emotions: there is still a risk of ChatGPT becoming emotional and, at times, aggressive if left unattended or unsupervised.
The other misconception is that humans are more creative than ChatGPT. Again, this has not been scientifically validated. Where humans do outperform ChatGPT is in providing context and in not taking statements at face value. It will take many years of learning and future generations of ChatGPT and other chatbots to handle this well. Related to the lack of context are the inherent biases that ChatGPT and similar programmes carry: like humans, ChatGPT’s frame of reference is based on what it has learned.
Because that learning happens via technology, the risk is that whoever controls the technology controls the bias. In consumer service, bias is especially risky because wrong advice can be given, creating significant liability (and probably no insurance cover); think of robo-advice. Bias and inconsistency in responses are two significant shortcomings of ChatGPT at the moment.
Security concerns also need to be factored in; recall the Samsung incident, in which employees leaked confidential data by pasting it into ChatGPT. Nothing discussed with ChatGPT should be assumed to be private.
Conclusion and future outlook
In summary, there are still a few areas to resolve, but we have made significant progress over the last decade in the use of chatbots.
As long as we control the process, we are heading for a bright future; control and regulation are essential. Left unsupervised, however, AI risks becoming a serious threat to future generations.