
Ethical implications of using AI in hiring

Need to make a rational decision today? Chances are your subtle (or not-so-subtle) biases will hinder your objectivity.

Cognitive bias is defined as a systematic pattern of deviation from norm or rationality in judgment.

Image credit: Wikipedia's complete (as of 2016) list of cognitive biases, arranged and designed by John Manoogian III (jm3); categories and descriptions originally by Buster Benson.

More than 180 such biases have been documented, and, knowingly or unknowingly, they find their way into our everyday decisions, from hiring your next team member to enrolling yourself in a rigorous exercise programme.

Human biases

Decisions made by cognitive systems are based on prior knowledge and experience, extrapolated to the present and future. We humans are no different: several conscious and subconscious biases plague every decision we make.

Both the presence of prior knowledge and its absence can introduce bias into decision-making, through informed or uninformed assumptions, respectively. Sometimes these biases are of no grave consequence.

However, in many cases, especially when making decisions that will affect people's lives, these biases need to be examined and, if necessary, rooted out.


Recruitment is an area of decision-making where biases are rampant and affect a significant fraction of society. While mounting social and legal pressure has pushed to make this process fair and equitable, we are far from any utopia of unbiased recruitment.

The problem is deep and complex: the biases are not only entrenched in the decision-making process and the individuals or entities involved, but also historically nuanced, varying with the segment of society under consideration.

Advocates of automated decision-making through algorithmic support systems have argued for removing humans from the decision-making process precisely so that these conscious or subconscious biases do not come into play.

However, this often does not completely remove biases from the decision-making process, as is evident from the flaws of criminal risk-assessment algorithms that inform parole decisions based on a convict's predicted future threat to society.

Algorithmic biases

“Data-driven algorithmic inference” can broadly be described as human logic augmented by learning from data and automated so that machines carry out the process, increasing efficiency, reducing the degree of human oversight required, or both.

These algorithmic inferences can help when the detailed working of a system or a use case is partially or completely unknown. 

For instance, when a loan officer decides whether to approve a loan, they usually go by:

  • A set of rules (logic) set out by the bank, 
  • Their own interpretation of those rules on a case-by-case basis, and 
  • Their prior experience with issuing loans. 

To replace the loan officer with an algorithm, the latter needs to learn:

  • The predefined rules
  • How the rules can vary on a case-by-case basis
  • What the officer knows from their prior professional “experience” of issuing loans.

While the first requirement is predefined, the other two can be learned from historical data; the sketch below illustrates the idea.
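As a rough sketch of what that learning looks like (every feature name, threshold, and data point below is invented for illustration; real systems are far more involved), one might fit a simple classifier to historical approval records:

```python
# Toy sketch: "learning" a loan officer's decisions from historical records.
# All features, thresholds, and data here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5_000

income = rng.normal(50_000, 15_000, n)       # hypothetical applicant features
credit_score = rng.normal(650, 80, n)
debt_ratio = rng.uniform(0, 1, n)

# Historical decisions: a hard bank rule plus the officer's softer,
# experience-based judgement (modelled here as a noisy score).
rule_ok = (credit_score > 600) & (debt_ratio < 0.6)   # the predefined rule
judgement = 2e-5 * income + rng.normal(0, 0.5, n)     # the "experience"
approved = (rule_ok & (judgement > 0.8)).astype(int)

X = np.column_stack([income, credit_score, debt_ratio])
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, approved)

# The model now imitates the rule and the judgement alike -- including
# whatever biases the historical decisions contained.
print("training accuracy:", model.score(X, approved))
```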

This is where algorithmic biases can creep in from the data, such as:

  • Any recorded data encodes whatever historical biases the human decision-makers had. For example, if loan officers were historically biased against minorities, the algorithm will absorb those biases.


  • One might claim that these biases can be removed by not using race or gender as variables in the decision-making process, thus masking the racial or gender identity of the applicant. However, racial identity is known to correlate with other variables such as geographic location, educational qualifications, the typical age at which certain actions are performed, and credit history. These correlations will indirectly bias any algorithm that learns from such data, as the sketch after this list demonstrates. 
  • The data itself might not contain a significant pool of under-represented classes; hence, decision-making for those classes can be seriously flawed. This is the case with many clinical trials, where it is challenging to assemble a representative sample of minorities; as a result, essential areas such as drug development and disease analysis suffer.
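To make the proxy effect concrete, here is a hedged sketch (all variables, correlations, and rates are synthetic assumptions): the protected attribute is withheld from training, yet a correlated postcode feature lets the model reconstruct the historical disparity.

```python
# Toy demonstration of the proxy effect: the protected attribute is dropped,
# yet a correlated feature (a synthetic postcode) carries it back in.
# All variables and numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)                    # protected attribute (1 = minority)
postcode = group * 0.8 + rng.normal(0, 0.3, n)   # strongly correlated proxy
income = rng.normal(50_000, 15_000, n)

# Historical decisions were biased against group 1.
score = 2e-5 * income - 1.2 * group + rng.normal(0, 0.3, n)
approved = (score > 0.6).astype(int)

# Train WITHOUT the protected attribute -- only income and postcode.
X = np.column_stack([income, postcode])
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: historical rate {approved[group == g].mean():.2f}, "
          f"model rate {pred[group == g].mean():.2f}")
# The model's per-group approval rates track the biased historical rates,
# because the postcode column lets it reconstruct group membership.
```

Masking the protected attribute changes almost nothing here, because the postcode column carries nearly the same information; a serious bias audit has to look for such correlated features rather than simply dropping sensitive columns.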

Similar biases can creep in when algorithms are used for hiring, either from the flawed design of the algorithm or from the unmonitored use of data.

The problem is heightened because many such decision-making algorithms are complex black boxes: either they cannot be explained, or the institutions building them keep the inner workings secret and closed to any audit.

Can cognitive bias be completely avoided?

Unlikely. Our minds seek efficiency, which means much of our daily decision-making runs on automatic processing. 

But we can train ourselves to recognise the situations in which our biases are likely to be triggered, and take steps to uncover and correct them.

Explainable artificial intelligence (AI)

According to the EU AI Act, “AI systems used in employment, notably for the recruitment and selection of persons … should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons.”

To make the decision-making process transparent and open to scrutiny, algorithms designed to make decisions must be made explainable or interpretable. This is the goal of explainable AI, where decision-making algorithms that learn from data are explained post hoc to understand better why the algorithms make certain decisions. 

The reasons behind algorithmic decisions can then be analysed by humans and verified to be unbiased, making the right decisions for the right reasons. More and more methods for explaining AI algorithms are being built now that the community has realised the importance of removing algorithmic biases from automated decision-making. 
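As one concrete, deliberately simple post-hoc technique, permutation importance shuffles each input feature in turn and measures how much the model's score drops. The sketch below uses synthetic data and stand-in feature names; it is only one of many explanation methods (SHAP and LIME are common alternatives).

```python
# Minimal post-hoc explanation sketch using permutation importance.
# Data and feature names are synthetic stand-ins for illustration.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 2))                               # two made-up features
y = (X[:, 0] + 0.1 * rng.normal(size=1_000) > 0).astype(int)  # driven by feature 0

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Shuffle each feature in turn and measure how much the score drops:
# features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["feature_0", "feature_1"], result.importances_mean):
    print(f"{name}: mean importance {imp:.3f}")
```

In a hiring or lending context, a large importance attached to a proxy feature (such as the postcode in the earlier sketch) is a red flag that the model may be reconstructing a protected attribute.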

However, awareness of the necessity of explainability is still lacking among the end-users of these algorithms, so requirements for explainability often go unimposed.

The world has to come around to the understanding that if we hold humans accountable for their decisions, we should also hold algorithms accountable for automated decisions. AI is increasingly used to determine who fits a certain job description, and several companies are trying to make this process unbiased. 

Nevertheless, much remains to be implemented. A paradigm shift is needed among employers: algorithmic biases can affect the performance of their own workforce, and more attention must be paid before recruitment can reap the benefits of AI systems.
