Unethical AI Applications in Recruitment

by admin on June 1, 2018 in Artificial Intelligence (AI), Technology

 

As artificial intelligence (AI) disrupts one industry after another, talent recruitment has become the latest area where AI is being put to work. But as in the early phase of every life-changing technology (think Alfred Nobel and dynamite), there is the possibility of abuse before rules of good conduct become accepted as standard practice. Using AI as a recruitment tool to get ahead in your industry has benefits, and potential pitfalls, as described by Forbes contributor Tomas Chamorro-Premuzic:

  1. Cyber-snooping: Traditional recruitment tools were invented to compensate for our historical inability to collect sufficient data on candidates’ history and behaviors to predict their future performance…For instance, machine-learning algorithms can be used to predict candidates’ intelligence and personality – including their dark side – from their Facebook profiles. Likewise, AI has been effectively used to translate our Twitter footprint into a fairly comprehensive personality profile, because our choice of words reflects who we are, including our talent and career potential (a minimal sketch of this kind of text-based scoring appears after this list).
  2. Withholding feedback: Historically, recruitment tools did not provide much feedback or information to candidates on their profile or on how their results connect to the outcome of the recruitment decision, except when the outcome was positive. While this is disappointing – why should we deprive candidates of useful career feedback and the chance to understand themselves better? – it is also true that the high-touch nature of traditional recruitment tools makes giving feedback more time-consuming and expensive.
  3. Predicting biased outcomes: There is no doubt that the biggest potential advantage of AI recruitment tools is to minimize our reliance on human judgment and intuition (which is rather biased). In an age of ubiquitous data and cheap prediction, there should be no excuse for playing it by ear, so when hiring managers or recruiters insist that they know talent when they see it, we should demand some evidence. Unfortunately, AI can also be deployed to predict the wrong outcomes. For example, when machine-learning algorithms mine digital interview data to predict whether recruiters will want to hire a candidate, they simply perpetuate human biases. In other words, training AI models to emulate human preferences or biased decision-making will only replicate the shortcomings of our own minds (the second sketch after this list illustrates the effect).
  4. Black-box selection: A final ethical consideration regarding AI recruitment tools is the degree to which they explain why a candidate has potential for a given job (or not). It is not enough to predict future job performance – recruitment tools should also help us understand the basis of such a prediction, which means having a rationale for selecting or rejecting a candidate. For example, when voice-scraping algorithms identify a connection between certain physical properties of speech and job performance, it is important to understand what the basis for that connection is (the final sketch after this list shows one way to surface such a rationale).
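
As a rough illustration of the first point, the sketch below scores a single personality trait from word choice. The tweets, the extraversion labels, and the choice of a TF-IDF plus ridge-regression model are all invented for illustration only; real profiling systems are far more elaborate.

    # Minimal sketch (hypothetical data): scoring a personality trait from word choice.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    tweets = [
        "had an amazing night out with friends, love meeting new people",
        "quiet evening at home with a book and some tea",
        "huge party this weekend, everyone is invited!",
        "prefer working alone, crowds drain me",
    ]
    extraversion = [0.9, 0.2, 0.95, 0.1]  # invented self-report scores in [0, 1]

    # Turn word choice into features and fit a simple regression to the trait.
    model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
    model.fit(tweets, extraversion)

    candidate_posts = ["spent the whole weekend hiking by myself"]
    print(model.predict(candidate_posts))  # rough trait estimate from word choice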
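
The third point can be made concrete with a small simulation. In the sketch below, simulated past recruiters apply a higher skill bar to one group, a model is trained to emulate their decisions, and that model then assigns different hiring probabilities to equally skilled candidates. The data, groups, and thresholds are entirely made up.

    # Minimal sketch (simulated data): a model trained on biased human hiring
    # decisions reproduces that bias.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    skill = rng.normal(size=n)            # true qualification, same distribution for both groups
    group = rng.integers(0, 2, size=n)    # 0 / 1, e.g. two demographic groups
    # Simulated past recruiters: group 1 must clear a higher skill bar to get hired.
    hired = (skill > np.where(group == 1, 0.8, 0.0)).astype(int)

    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, hired)   # learns to emulate the recruiters

    # Two equally skilled candidates, differing only in group membership:
    print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
    # The group-1 candidate gets a lower predicted "hire" probability,
    # even though the underlying skill is identical.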
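
For the fourth point, one simple way to attach a rationale to a prediction is to inspect the per-feature contributions of a linear model. The speech features and labels in the sketch below are hypothetical; it is meant only to show what a minimal "why" could look like, not how any particular vendor's voice-analysis tool works.

    # Minimal sketch (invented features and labels): explaining a prediction
    # by reading off per-feature contributions of a linear model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["speaking_rate", "pitch_variance", "avg_pause_length"]
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))          # simulated, standardized speech features
    y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    candidate = np.array([1.2, -0.3, 0.8])       # one candidate's feature values
    contributions = model.coef_[0] * candidate   # each feature's pull on the log-odds
    for name, value in zip(feature_names, contributions):
        print(f"{name}: {value:+.2f}")
    print(f"intercept: {model.intercept_[0]:+.2f}")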

Although AI will improve how people are matched to jobs, it is important to monitor and shape the ethical implications of this technology while it is still in its infancy and easy to change.