The AI revolt: How our love affair with technology could turn into a hate story

In September 2023, Meta made a groundbreaking announcement: the introduction of Meta AI in beta, an advanced conversational assistant available on WhatsApp, Messenger, and Instagram, and soon to be integrated into Ray-Ban Meta smart glasses and Quest 3.

These AI entities are not your typical virtual assistants; they’re designed to have more personality, opinions, and interests, making interactions far more engaging and enjoyable. What’s more, Meta has enlisted cultural icons and influencers such as Snoop Dogg, Tom Brady, Kendall Jenner, and Naomi Osaka to lend their voices and personalities to these AI companions.

Screenshot from Meta’s website page Introducing new AI experiences

Challenges and paradoxes in AI-human interaction

In an age where virtual assistants and entertainment are increasingly powered by AI, it’s not hard to imagine a future where people grow weary of the digital realm. The novelty of interacting with artificial intelligence, whether as assistants or characters in our entertainment, may soon wear off.

However, the path to this future of AI-human interaction is far from straightforward. Consider the recent experiment conducted by Joanna Stern, a columnist at The Wall Street Journal. Stern replaced herself with AI-generated voice and video, diving headfirst into a series of challenges, including creating a TikTok video, making video calls, and testing her bank’s voice biometric system. The results were nothing short of eerie.

As Stern navigated through her tasks, she found herself face to face with technology that had become astonishingly humanlike in its voice and facial expressions. The AI clone mimicked her voice with almost perfect precision, making it difficult to distinguish from her real voice.

However, when it came to the video clone, there was a stark contrast. Despite the nearly flawless voice cloning, the video clone left much to be desired. It struggled to reproduce the subtle nuances of movement and facial expression, and its visuals did not match the atmosphere and context of the conversation. The imitation was clumsy enough that it drew ridicule and was immediately exposed.

Human clones: Blurring boundaries and the verification conundrum

This experiment underscores a paradox that may define our future interactions with AI-driven human visualisations.

Despite this rather unsuccessful initial attempt at replicating human behaviour, technology is bound to catch up with our expectations of AI interaction. In turn, people will look for more authentic experiences that truly engage our senses and emotions.

This craving could fuel the next wave of explosive interest in human avatars: clones of real personalities, historical figures, and celebrities, including our living or deceased relatives and friends.

These clones will replicate the appearance, voice, and personality of their real-life counterparts, and even simulate their thoughts and reasoning, blurring the boundaries between reality and simulation.

The future of AI-human interaction: From fascination to weariness

At this stage, a new obstacle will arise: the verification of clones. After all, you’d prefer to converse with or seek advice from a clone of Keanu Reeves if it’s verified by Keanu himself, wouldn’t you? Or discuss the current political situation with a Lincoln clone whose mannerisms, tone, and thought processes have been verified by a group of historians or institutions.

Just as every secure website presents an HTTPS certificate vouched for by a trusted authority, every clone must carry a verifiable credential so that we know it is “authorised” to act on behalf of a specific person.
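
To make the analogy concrete, here is a minimal sketch of what such an authorisation could look like, assuming a standard digital-signature scheme (Ed25519 via Python’s cryptography library). The manifest fields and function names are hypothetical illustrations rather than any existing standard: the person being cloned signs a description of the clone, and anyone can check that signature against the person’s published public key.

```python
# Hypothetical sketch: the person a clone represents signs a "manifest"
# describing the clone (who it depicts, which model it runs, what it may do),
# and anyone can verify that signature against the person's public key.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sign_clone_manifest(private_key: Ed25519PrivateKey, manifest: dict) -> bytes:
    """The represented person (or their estate) signs the clone's manifest."""
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    return private_key.sign(payload)


def verify_clone_manifest(public_key, manifest: dict, signature: bytes) -> bool:
    """Anyone holding the person's published public key can check the clone."""
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False


# Illustrative use: a celebrity authorises a conversational clone of themselves.
person_key = Ed25519PrivateKey.generate()
manifest = {
    "subject": "Keanu Reeves",           # hypothetical example from the article
    "model_hash": "sha256:...",          # hash of the avatar model being attested
    "permitted_uses": ["conversation"],  # what the clone may do on their behalf
}
signature = sign_clone_manifest(person_key, manifest)
print(verify_clone_manifest(person_key.public_key(), manifest, signature))  # True
```

In practice, the hard part is not the cryptography but the trust infrastructure around it: who issues and revokes the keys, and how platforms surface the result to users, much as browsers do for HTTPS today.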

Beyond Meta’s announcement, it’s worth noting that startups such as Synthesia and HeyGen are getting closer to creating engaging AI avatars. These companies are at the forefront of pushing the boundaries of AI-human interaction, offering the promise of even more convincing and engaging digital personalities.

At the moment, technology is still far from being able to generate a video stream with human-like movements. For this, AI needs to “understand” how to match movements and facial expressions with text and, especially, context. This could take another five to 10 years of development.

Another issue is the computational performance required to do this in real time. That, too, seems achievable, so we might soon have a clone that is visually indistinguishable from a real person in simple conversations.

As the years pass, even these astonishing AI clones will likely lose their lustre. People will grow tired of the predictability and limitations of these replicas, missing the unpredictability and quirks of genuine human interaction. It’s a paradoxical scenario where we yearn for authenticity but find ourselves in a world dominated by artificial beings.

While it is difficult to imagine at the moment, sooner or later there will come a time when people start to resent the omnipresence of AI-driven visualisations. They will begin to long for the days when human interaction was unadulterated by technology and when genuine emotions and imperfections defined our relationships. But by then, AI visualisations will have permeated every aspect of our lives, from work to entertainment, making escape nearly impossible.
