
As AI becomes common practice in organisations, its benefits emerge clearly: better growth prospects, more efficient strategies and customised consumer experiences. AI brings previously unseen possibilities, such as automated decision-making and distinctively targeted advertising campaigns.
However, this technological advancement has not come without problems. For instance, AI has raised privacy concerns, chiefly because its systems demand personal data as inputs, which can erode the trust between firms and their customers. Beyond that, involuntary biases in the algorithms themselves, where an AI system unwittingly builds bias into its outputs and produces unfair results, can go a long way towards straining brand image and consumer trust.
Regardless, there is no doubt that AI has profoundly transformed the rapidly growing marketing technology (martech) industry, even in the face of these ethical issues. While AI optimises automation, increases customer satisfaction and improves analytical capabilities, organisations have to strike the right balance between innovation and accountability. In this article, we explore what these issues are and how they can be effectively addressed.
Essential ethical considerations to address
As AI continues to penetrate different industries, it has never been more urgent to raise awareness and ensure that AI is properly implemented. According to Forbes, more than 51 per cent of company leaders believe that AI transparency and ethics are critical to their operations, and 41 per cent of top executives have halted the deployment of AI technologies due to potential ethical concerns.
Transparency in the context of AI means that the workings of an artificial intelligence system should be visible and understandable, and that its decision-making should conform to ethical norms and general human values. Transparency matters, for example, wherever companies employ AI to understand customer behaviour, target advertisements and manage their marketing overall.
To support the increase in transparency, some organisations have started giving customers more information about the usage of their data. AI transparency is also important where the risks are especially high that the consequences of AI decisions will impact lives or have large social outcomes, such as in healthcare and finance.
Another ethical consideration that has to be discussed is bias and discrimination in AI. For all its advantages, AI is not without controversy on this front. Most AI models are trained on large datasets that may mirror biases present in society, and the models then reproduce those biases in their results.
Bias in AI can stem from various sources such as:
- Bias in training data: If the training data contains inherent biases, the AI system will likely reproduce these biases in its decision-making processes. For instance, in a study, scientists tasked AI with developing a facial recognition system designed to classify individuals into categories based on their characteristics, such as doctors, criminals, and homemakers. However, the AI demonstrated bias in its decision-making, frequently labelling women as homemakers, Black men as criminals, and Latino men as janitors, and selecting women of all ethnicities less often as doctors.
- Algorithmic bias: Beyond the data, poorly designed algorithms can amplify existing biases or create new ones. In 2018, Amazon’s AI recruitment algorithm was designed to assess candidates based on their fit for different roles. However, due to the underrepresentation of women in technical positions, the system developed a bias, favouring male applicants as it learned that men were historically preferred for these roles.
- Cognitive bias: Personal experiences and perspectives may lead developers to prioritise certain data over others, potentially skewing the AI’s outputs. For example, favouring data from a particular demographic or geographic region might result in an AI system that does not accurately reflect a global or diverse population.
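The data-driven biases above can often be surfaced with a simple fairness check before a model ever ships. As a minimal sketch, the code below computes the demographic parity difference, the gap in positive-outcome rates between groups, on a small hypothetical dataset (the group labels and outcomes are invented purely for illustration, not drawn from any real system):

```python
# Minimal sketch: measuring demographic parity difference between groups.
# All data below is hypothetical, purely for illustration.

def positive_rate(outcomes):
    """Share of positive (1) outcomes in a list of 0/1 labels."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(records):
    """Gap in positive-outcome rates between the groups in `records`.

    `records` is a list of (group, outcome) pairs with outcome in {0, 1}.
    A value near 0 suggests parity; a large gap flags potential bias.
    """
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, []).append(outcome)
    rates = [positive_rate(v) for v in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical screening outcomes: 1 = shortlisted, 0 = rejected.
data = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]
print(round(demographic_parity_difference(data), 2))  # prints 0.5
```

Here group_a is shortlisted at a rate of 0.75 and group_b at 0.25, so the 0.5 gap would warrant investigation. Checks like this are a starting point, not proof of fairness, and in practice would sit alongside a fuller audit.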
Strategies for mitigating bias and promoting fairness in AI
In 2024, Malaysia tabled a PDP Bill that outlines significant changes to the Personal Data Protection Act (PDPA), including revised definitions of key terms, added responsibilities for data controllers, and increased fines for non-compliance. The government regards these changes as real progress in strengthening data protection in the country and as part of a continuing shift toward stricter privacy rules. For companies, this is a good opportunity to enhance their data protection and bring it into line with global standards.
To start, there are various measures that companies can take to make the process ethical and responsible. One key strategy is to prioritise transparency: businesses must provide clear insights into how their AI algorithms operate. Explainable AI (XAI) plays a vital role here, offering techniques that help users understand and trust the decisions made by AI. By incorporating simplified visuals or user-friendly software interfaces, employees can grasp the underlying processes rather than relying on AI systems blindly.
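To illustrate the kind of insight XAI aims to provide, the hedged sketch below decomposes a linear model's score into per-feature contributions, so a user can see which input pushed a decision up or down. The weights, feature names and the ad-scoring scenario are all hypothetical, chosen only to show the idea:

```python
# Minimal sketch of an explainability idea: decomposing a linear model's
# score into per-feature contributions. Weights and features are hypothetical.

def explain(weights, features):
    """Return each feature's contribution (weight * value) to the score."""
    return {name: weights[name] * value for name, value in features.items()}

# Hypothetical ad-targeting score: which signals drove it?
weights = {"pages_viewed": 0.4, "cart_adds": 1.2, "days_inactive": -0.25}
customer = {"pages_viewed": 5.0, "cart_adds": 2.0, "days_inactive": 10.0}

contributions = explain(weights, customer)
score = sum(contributions.values())

# List contributions from most to least influential, then the total.
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.1f}")
print(f"score: {score:.1f}")
```

A readout like this ("days_inactive pulled the score down by 2.5; cart_adds pushed it up by 2.4") is far easier for a marketer to sanity-check than a bare score. Real deployments tend to use richer attribution methods for non-linear models, but the goal of making each decision inspectable is the same.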
In addition to transparency, maintaining robust data security is critical. Research shows that 44 per cent of security decision-makers say their companies incorporate security and privacy measures from the outset when developing services, products, or applications.
Moreover, 87 per cent of consumers state that they won’t engage in business with a company if they have concerns about its security practices. This underscores the importance of continuous data monitoring, with dedicated personnel responsible for safeguarding information and preventing leaks.
Companies should also ensure that their AI solutions comply with industry regulations and legal standards, as organisations that prioritise ethical AI are more likely to gain consumer confidence and build reliable AI systems. Furthermore, establishing human supervision as a control on AI, where the system makes suggestions and human experts take the final decision, helps ensure that these systems run fairly and effectively.
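The human-supervision pattern described above can be sketched as a simple routing rule: an AI suggestion is only applied automatically when its confidence clears a threshold, and everything else is queued for a human expert. The 0.9 threshold, the action names and the record format below are illustrative assumptions, not a prescribed implementation:

```python
# Minimal sketch of human-in-the-loop oversight: low-confidence AI
# suggestions are routed to a human reviewer instead of being auto-applied.
# The threshold and the suggestion records are hypothetical.

CONFIDENCE_THRESHOLD = 0.9

def route(suggestions):
    """Split AI suggestions into auto-applied and human-review queues."""
    auto, review = [], []
    for s in suggestions:
        (auto if s["confidence"] >= CONFIDENCE_THRESHOLD else review).append(s)
    return auto, review

suggestions = [
    {"id": 1, "action": "approve_ad", "confidence": 0.97},
    {"id": 2, "action": "segment_customer", "confidence": 0.62},
    {"id": 3, "action": "approve_ad", "confidence": 0.91},
]
auto, review = route(suggestions)
print([s["id"] for s in auto])    # prints [1, 3]: applied automatically
print([s["id"] for s in review])  # prints [2]: human makes the final call
```

In practice the threshold would be tuned per use case, and the review queue would feed back into retraining, but even this simple gate keeps a person responsible for the least certain decisions.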
Implementing all these measures is critical in developing AI systems that are not only ethical but also efficient. At OpenMinds, we believe that we have a responsibility to lead by example, and we understand the importance of integrating ethical considerations into any AI development process.
Conclusion
In conclusion, reducing bias and encouraging fairness in AI systems is not only a technical challenge but also an ethical one. The strategies outlined here are essential steps towards building trustworthy and ethical AI systems. As we continue to innovate in the martech industry, we aim to contribute to a future where AI benefits everyone, regardless of their background or identity.
—
Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.
Image credit: Canva Pro.
This article was first published on August 19, 2024.
The post Navigating the AI maze in Malaysia’s martech: Striking a balance between efficiency and ethics appeared first on e27.




