Artificial Intelligence (AI)

Algorithms Can Be Secretly Biased as Well: The Case of Recruitment Automation Tools

Jun 02, 2020

Humans have been recruiting since the dawn of time. In the beginning, it was the most capable hunters, then builders, painters after them, musicians, all the way to the widest variety of roles we are hiring for today. Even though it’s been thousands of years now, the basic premise never changed: get the most qualified candidates at the best possible rate. These values apply to AI recruitment as well.

The ultimate goal of hiring probably remains the same, and it goes along these lines:

  • Find successful and diverse employees
  • Reduce time to hire, multifold if possible
  • Bring down the cost per hire
  • Maximize the quality of every hire
  • Shoot for better workplace diversity

One of the most persistent issues to survive aeons of refinement of talent acquisition processes is bias. An ever-expanding number of biases plague the modern job-seeking sector. Bias is the tendency of our brains to look past facts and logic when forming assumptions and to reach for intuitive decisions, which are often colored by our cultural surroundings and personal beliefs. This bias can be deliberate or completely unconscious. In either case, it risks jeopardizing the candidate selection process.

To make things a little bit worse, we have managed to transmit our bias problems to computers as well. For almost two decades now, we have used digital technology to manage our hiring decisions. More recently, we have been turning to new predictive hiring tools to do the job for us. What could have been the end of discrimination in hiring is turning out to be potentially as problematic as old-school selection. The rapid growth of AI-enabled tools, the inability to iterate fast enough, and sluggish regulation surrounding their implementation are taking their toll.

What is the situation today? How do these tools fare? How do talent acquisition technology vendors frame their offerings? Where does intuition stop and logic start?

Regardless of whether you use AI for recruitment, the process itself begins long before someone submits an application. Advertising allows us to spread the word quickly across the employment spectrum and target potential candidates by channel. The acquisition channel, however, can and usually does dictate the quality of applicants, and recruiters can assign value to this information.

When talking about AI recruitment, we start with the most human thing there is: to work, even AI tools need input. The machine needs to be taught how to think and draw conclusions from strings of data. With incomplete or missing data, there can be no prediction. At its best, artificial intelligence will help you speed up the process and reach talent that would otherwise be overlooked.

We shape models of AI behavior on real-life cases that serve as good examples. This means that if those examples are inaccurate, unrepresentative, or biased, the model will be too, at least out of the gate. Every algorithm is designed to evolve over time and learn from its mistakes. As the data sets grow more complex and analysis patterns more frequent, an opportunity arises for something to be learned. In most cases, real people weigh in on the accuracy.

A good example of these techniques is one where the AI learns to hire based on previous hires that were labeled as good. While the basis of this idea is sound, the technique leaves room for reproducing patterns of inequity at every stage of the hiring process by favoring the historically preferred profile of applicant, even when the AI was specifically told to ignore inputs for race, gender, age, and other submitted information. Since predictive recruitment tools can reflect institutional and systemic biases, simply omitting problematic inputs cannot be the solution.
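The mechanism is easy to demonstrate in miniature. The sketch below (a hypothetical illustration with fabricated data, not any vendor's actual model) trains a naive scoring rule on past hires with the protected attribute already removed; a correlated proxy feature, here "college", is enough for the learned scores to reproduce the historical skew.

```python
# Hypothetical illustration: the protected attribute is absent from the
# training rows, yet a proxy feature correlated with it ("college") lets
# the learned scores inherit the bias baked into historical labels.

from collections import defaultdict

# Fabricated history of (college, years_experience, labeled_good).
# In this past data, nearly all "good" labels went to College A hires.
history = [
    ("A", 5, 1), ("A", 3, 1), ("A", 4, 1), ("A", 2, 1),
    ("B", 5, 0), ("B", 6, 0), ("B", 4, 1), ("B", 3, 0),
]

def train(rows):
    """Score each college by the fraction of past hires labeled 'good'."""
    good, total = defaultdict(int), defaultdict(int)
    for college, _, label in rows:
        total[college] += 1
        good[college] += label
    return {c: good[c] / total[c] for c in total}

scores = train(history)
# Two equally experienced candidates now get very different scores purely
# because of the proxy feature: scores["A"] == 1.0, scores["B"] == 0.25.
```

Removing the sensitive column changed nothing here, because the proxy carries the same signal; that is why the article argues that avoiding problematic inputs cannot be the whole solution.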

Some AI biases aren't even linked to the technology side of the service. Embarking on a digital recruitment process requires candidates with limited internet access or computer skills to go the extra mile to comply. The status of people with disabilities remains a big question. Recruitment experts are still debating whether online job platforms further the exclusion of certain groups or actually help mitigate it.

As good as the tools are becoming, they are not yet trusted to make the final decision; that privilege remains reserved for the human recruiter. The AI is, however, often trained to deliver the bad news. Hiring tools were not designed to deliver mostly affirmative decisions: considering that even with new technologies the number and quality of applicants remain the same, the AI tools end up automating mostly rejections. Ultimately, it is human eyes that will see the shortlisted candidates and make the decision.

The global recruitment software market was estimated at 1.7 billion US dollars in 2017 and is projected to reach the 3.1 billion mark by the end of 2025. As is usually the case, adoption of the technology is highest among international tech giants such as IBM, Amazon, and Oracle. However, with the expansion of the startup scene, we are witnessing many newly founded companies joining the race, often with more competent offerings.

We designed Pandy AI to transfer all the offline recruitment processes our company had been performing for well over 20 years onto a digital AI recruitment platform. It was designed to completely bypass the two things hiring had depended on up until that moment:

  • The intermediary: an individual establishing and maintaining the connection between the client and a bank.
  • The multilayered approach to hiring: an intertwined process of numerous meetings, phone calls, email exchanges, document shares, and more.

Both were fertile ground for bias-affected decision-making, which is why they needed to be streamlined through technology or removed altogether. Inside Pandy AI, candidates are ranked simply by their success ratio, a value easily calculated from the numbers every member submits upon entry. All information irrelevant in the overall context of banking was omitted to remove confusion and save time.
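The article does not spell out how the success ratio is computed, so the following is a minimal sketch under one assumption: that it is the number of successful engagements divided by the total engagements a member reports on entry. The member names and field names are illustrative, not Pandy AI's actual schema.

```python
# Hypothetical sketch of success-ratio ranking; the exact formula and data
# fields are assumptions, since the article does not define them.

def success_ratio(successful: int, total: int) -> float:
    """Fraction of a member's engagements that were successful."""
    return successful / total if total else 0.0

# Illustrative entry data submitted by members.
members = [
    {"name": "m1", "successful": 8, "total": 10},  # ratio 0.8
    {"name": "m2", "successful": 5, "total": 5},   # ratio 1.0
    {"name": "m3", "successful": 2, "total": 8},   # ratio 0.25
]

ranked = sorted(
    members,
    key=lambda m: success_ratio(m["successful"], m["total"]),
    reverse=True,
)
# Ranking order by ratio: m2 (1.0), m1 (0.8), m3 (0.25)
```

A single numeric criterion like this is easy to compute and audit, though, as the earlier sections note, any metric derived from historical outcomes should still be checked for inherited skew.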

The final frontier in fighting bias remains with the institutions. Companies should aim for diversity not only because it's the right way forward, but also because there is a business-scaling opportunity behind it. A more dynamic, engaged team drawing on a diverse set of backgrounds and perspectives will be more inspired and bring better results.