Let’s face it - AI is one of the hottest buzzwords in the recruitment industry at the moment. Outsourcing tedious tasks to a robot might sound exciting, but getting a machine to do them successfully is another matter entirely.
Artificial Intelligence requires A LOT of data to work properly; that data serves as its reference material. To make sense of it, AI relies on algorithms that, within milliseconds, mimic the logic of the human brain. Translated into recruiter language: the machine needs a large volume of data to learn how to screen resumes as successfully as a real recruiter and to make intelligence-driven decisions based on its observations. Self-learning by nature, AI algorithms get smarter with every new analysis.
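To make the "learns from data, gets smarter with every analysis" idea concrete, here is a minimal, purely illustrative sketch of a self-learning resume screener. Everything in it (the `ResumeScreener` class, the sample resumes, the keyword-counting approach) is a hypothetical toy, not any vendor's actual product:

```python
from collections import Counter

class ResumeScreener:
    """Toy keyword-frequency model: it learns which words tend to appear
    in resumes a recruiter marked as strong vs. weak, and improves as it
    is shown more labelled examples."""

    def __init__(self):
        self.strong = Counter()  # word counts from successful resumes
        self.weak = Counter()    # word counts from rejected resumes

    def learn(self, resume_text: str, hired: bool) -> None:
        words = resume_text.lower().split()
        (self.strong if hired else self.weak).update(words)

    def score(self, resume_text: str) -> int:
        # A positive score means the resume resembles past successful ones.
        return sum(self.strong[w] - self.weak[w]
                   for w in resume_text.lower().split())

screener = ResumeScreener()
screener.learn("python sql teamwork", hired=True)
screener.learn("python leadership", hired=True)
screener.learn("typing filing", hired=False)

print(screener.score("python teamwork"))  # -> 3
```

Each call to `learn` refines the word counts, so the scores become more discriminating as more hiring decisions are fed in, which is the essence of the "smarter with every new analysis" claim.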
Unlike at recruitment agencies, with machine learning all the data is unified within the recruiting ecosystem and processed internally. Access to the data is locked at the source level, granting entry only to the data’s owner. The machine refreshes the data routinely to ensure accuracy and quality.
Everything mentioned so far is the backend, technology side of the story. Perhaps an even more important part of AI technology is the user-facing one. All the data points mentioned so far involve personal user data. International data regulations, one of which is the GDPR (General Data Protection Regulation), define the ways candidates should be informed about the whereabouts of their personal data: where it is stored, who has access to it, how that access can be revoked, and similar. Just like before, transparency remains the strongest currency a recruitment business can own.
Leaving the technical side aside, the moral and ethical implications of switching to Artificial Intelligence and using large quantities of data remain a big concern across the industry. Many questions arise, for instance:
- What happens to the people involved in the industry and their jobs?
- How do we stay on top of a complex intelligent computer system?
- Have we taken precautions against the potential negative consequences of introducing AI?
- How does AI affect behavior and interaction among real humans?
Large companies currently incorporating AI into their businesses are working toward answering these questions and providing clear guidelines. These guidelines are not created only for their own internal purposes, but also for use across industries. The way the big Silicon Valley companies do it is by creating ethics boards within their ranks, tasked with monitoring AI developments. The biggest concern with these boards is their consultative nature and the fact that they do not really have the power to push for change.
A famous case of AI misuse comes from one of the world’s largest companies, Amazon. As the biggest e-retailer in the world, the company relies heavily on Artificial Intelligence for recruiting and selecting workers for its warehouse operations. Amazon’s experimental AI-powered hiring tool rated applicants on a five-star scale, just like products are rated inside its online store. A year after the system was put into operation, the team working on it started noticing irregularities in the results. They were not gender-neutral and, in general, preferred men over women: Amazon’s AI machine had taught itself to downgrade resumes that included the word “women’s.” Although the algorithms were later adjusted, the company ultimately disbanded the team, having lost hope for the project. And while Amazon never confirmed whether its recruiters would keep using the tool, the steps it took indicate the direction the industry will be heading over the next five years.
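The mechanism behind the Amazon incident is easy to demonstrate: if the historical hiring data is skewed, a model trained on it will absorb the skew. The sketch below is a hypothetical toy, not Amazon’s system; the `train` function and the four fabricated resume lines exist only to show how a word like “women’s” can end up with a negative weight:

```python
from collections import Counter

def train(labelled_resumes):
    """Learn a per-word weight from historical hiring decisions:
    +1 each time the word appears in a hired resume,
    -1 each time it appears in a rejected one."""
    weights = Counter()
    for text, hired in labelled_resumes:
        for word in text.lower().split():
            weights[word] += 1 if hired else -1
    return weights

# Hypothetical history skewed against resumes mentioning "women's":
history = [
    ("software engineer chess club", True),
    ("software engineer robotics", True),
    ("software engineer women's chess club", False),
    ("software engineer women's coding society", False),
]
weights = train(history)

print(weights["women's"])   # -> -2: the word itself is now penalized
print(weights["software"])  # ->  0: neutral, appears on both sides equally
```

No one programmed the penalty; the model simply reproduced the pattern in its training data, which is exactly why biased historical data yields biased automated decisions.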
The examples above indicate the need for external regulation as well. Governments aren’t sitting on their hands, either, and state-made legislation can work, too. Recently, for the first time ever, the US government charged Facebook over advertising tools that allowed targeting users based on their ethnicity. The Cambridge Analytica scandal is another example urging governments to fill the gaps when it comes to responsible use of AI technology.