Toxic positivity? That seems a bit harsh when talking about Artificial Intelligence (AI), especially given that all modern businesses aspire to be data-driven in both strategy and execution.
Indeed, over the last decade or two, data has become almost universally regarded as a key corporate asset and an essential input to quality decision-making. And now, with the rise of Generative AI, there is an argument to be made that we’re reaching new levels of insight and productivity.
That said, the increasing importance of data as an asset has resulted in significant overheads being required for its protection. Whether we’re talking about security, regulatory obligations, or simply data integrity, it’s clear that there are plenty of risks and concerns associated with data and its downstream contribution to AI.
In recent times, the concept of data as a liability, and with it the idea of “ethical AI”, has also reared its head, albeit usually in terms of data’s strategic value and what might happen if it were compromised in some way. The prevailing analogy shifts from “data is the new oil” to “data is like uranium”: both powerful and dangerous. Savvy data practitioners now realise that governance, while never sexy, has taken on a new and heightened importance in the age of AI.
What effective data practitioners know: Context matters
Yet that’s not quite what we’re talking about here. For me, toxic positivity as applied to data takes two forms: context and presentation. After all, the broader concept of toxic positivity is a social construct that appeals to popular culture and the zeitgeist of today, so why wouldn’t it pertain to data, and especially to AI, given AI’s more personal interface?
Taking context first, it’s easy to see how many data practitioners become enamoured with their analyses and reports and are blinded to more mundane considerations like relevance and impact. This type of toxic positivity stems from the idea that data is the sole (objective) truth and is therefore unassailable. Overconfidence in your data and algorithms breeds unwarranted certainty around the resulting insights and can yield fatally flawed decisions.
The solution to this problem is to maintain a healthy scepticism towards prima facie answers and to apply common sense and experience in equal measure. In a flashback to my management consulting days, data should be used to prove or disprove the hypothesis, not the other way around.
An AI wake-up call: always question the path of least resistance
In recent times, though, a more insidious threat to decision-making integrity has emerged in the form of Generative AI solutions and, more specifically, their user interfaces. The challenges with AI are both many and well-identified and include a lack of explainability, poor transparency, and variable data quality, to name a few.
Less obviously, a “positivity” problem now presents itself in the form (or presentation) of AI’s responses: they are delivered in such a prescriptive and authoritative manner as to silence any debate about their value or correctness. Here is where the foibles of the technology tend towards toxic positivity. Attractive, easy answers, presented as compelling and “right”, are the easy option for time-poor analysts and passive insight consumers.
This problem is much harder to solve, primarily because Generative AI has such broad applicability, with no clear signature of its usage. Likewise, without any way of knowing whether answers are right or wrong, users will naturally lean towards the path of least resistance. Unfortunately, once headed down this path, it is very hard for them to turn back.
To get the most value from AI, never forget the data fundamentals
The assertions above aren’t intended to question the value of AI, data, or data-driven decision-making; the right knowledge, thoughtfully applied, can illuminate a decision with new possibilities. Rather, they highlight a fundamental of analytical practice that has always existed: understand your business first, and only then seek relevant and considered insights.
Your business doesn’t exist to “consume insights” or to “leverage AI”. It exists to satisfy customer needs while simultaneously generating profits. Thus, the task of stewardship falls to the thoughtful AI and data practitioner who understands how these capabilities support the creativity, productivity, and tenacity required for business success.
To paraphrase Pablo Picasso’s famous 1964 quote: “Computers are useless. They can only give you answers.” The enlightened leader (and analyst) should therefore spend just as much time asking why the analyses matter as asking what the AI says.
Toxic positivity comes in the form of the attractive soapbox spruiker standing on the corner, telling you they have all the beautiful answers (whatever the question may be). At Domo, we often get our customers to focus on “data curiosity” – it’s never been more important.
The post Why AI needs context and curiosity, not toxic positivity appeared first on e27.