AI, the Future of Recruitment, Hasn’t Solved the Bias Problem


Any plugged-in HR professional is well aware that artificial intelligence is revolutionizing the industry.

In fact, six weeks ago I wrote a post on how robotic process automation, largely powered by AI, can help upgrade dated and disconnected management systems and overcome many of the inefficiencies dragging down HR departments – from streamlining and compressing business procedures to automating the onboarding experience.

And while AI is indeed boosting myriad HR processes, risks should be assessed before blindly incorporating these new technologies – especially regarding hiring.

AI is streamlining the hiring process

Hiring – especially at major companies – can be a long, tedious and arduous operation. Most HR executives welcome new technologies that promise to make the entire process easier.

And why wouldn’t they?

With 70% of HR managers agreeing that data-driven processes would improve their recruiters’ effectiveness, AI is perfectly positioned to fill this need.

AI tools can contribute to nearly every aspect of recruitment, such as:

  • Improving the quality of hire by reaching a larger pool of high-quality candidates: New technologies can write better, more targeted job descriptions and then identify the candidates who are not only best suited for a specific company or position but also unlikely to churn.
  • Enhancing the job candidate and onboarding experience by automating parts of the process and providing answers to any questions.
  • Providing cost- and time-saving benefits for HR departments by automating high-volume, time-consuming and repetitive tasks to reduce the administrative load and eliminate many procedural hassles.

But despite the many ways these emerging AI technologies can help with recruitment efficiency, many experts feel they haven’t quite cracked one key issue: recruitment effectiveness – particularly the quality of hire described in the first bullet point above. Ultimately, although many of the AI-based services say they can find the best people for a job, solid proof is lacking.

And this is a problem. Now that HR data is easier to access, gather and examine, quality of hire is one of the most highly sought KPIs in the sector.

While there may be significant amounts of data created during each recruitment stage that AI systems can use, it isn’t enough. HR departments need better, more conclusive data and intelligence that ensures these AI tools are screening for the most relevant indicators.

This has led to an influx of recruitment agency start-ups programming their AI systems to analyze new types of datasets that they guarantee will identify the most relevant potential hires, in large part by assessing people’s personalities in more detail than ever before.

This new breed of AI-driven service is sifting through candidate data to cross-reference previous experience, knowledge and skills to specific job requirements.

Automated systems are actually digging deeper than most candidates even realize, going through their social media profiles and history, and even quantifying micro-expressions, vocal inflections and the word clusters used during a video interview.

Although fewer than 50% of Americans are aware that such algorithmic hiring systems even exist, major brands like Unilever and Hilton have been using them.

The start-ups creating the systems say that all these intimate data points help their robots match jobs with candidates who will be happier, more productive and less likely to turn over. They argue, in fact, that this hyper-analysis can eliminate bias from the recruitment process.

If all this were true, it would be a pretty big deal.

Imagine if a company could say with certainty that its hiring bears no bias – especially useful considering the sensitivity of issues such as the gender pay gap and diversity in the workplace.

In reality, though, the technology isn’t quite there – yet.

AI is not immune to bias

The companies programming these AI-driven tools argue that bias can be reduced and even eliminated through an algorithm assessment platform that can be evaluated, edited and reprogrammed if necessary. They are also quick to point out that the risk of unconscious bias – an ever-present hazard with human recruiters – is not an issue for a robot.

By using data and predictive analytics, the systems purportedly examine only specific criteria, allowing recruiters to make decisions based purely on how likely a candidate is to succeed in a particular role – as measured by algorithms that calculate a matching score for each applicant.
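
To make that concrete, here is a minimal, hypothetical sketch of how such a matching score could be computed. The skill list, weights and cosine-similarity scoring below are invented for illustration; vendors’ actual models are proprietary and almost certainly more elaborate.

```python
# A hypothetical matching score: candidates and jobs become vectors of
# skill weights, and the score is their cosine similarity. Every skill
# name and weight here is invented for illustration.
import numpy as np

SKILLS = ["python", "sql", "communication", "leadership", "design"]

def skill_vector(weights: dict) -> np.ndarray:
    """Map a {skill: weight} dict onto a fixed-length vector."""
    return np.array([weights.get(s, 0.0) for s in SKILLS])

def matching_score(candidate: dict, job: dict) -> float:
    """Cosine similarity between candidate and job vectors (0 to 1)."""
    c, j = skill_vector(candidate), skill_vector(job)
    denom = np.linalg.norm(c) * np.linalg.norm(j)
    return float(c @ j / denom) if denom else 0.0

job = {"python": 1.0, "sql": 0.8, "communication": 0.5}
print(matching_score({"python": 0.9, "sql": 0.7, "design": 0.4}, job))
print(matching_score({"communication": 0.9, "leadership": 0.8}, job))
```

Everything hinges on which criteria end up in those vectors – which is exactly where the trouble starts.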

But couldn’t innocuous criteria used by the machine to filter candidates – for example, playing lacrosse or tennis, or having a musical background – happen to favor a particular race or gender?

As with all code, the AI can only be as unbiased as the people who programmed it. And because AI learns from patterns in previous behavior, any human bias that exists in a recruiting process can be picked up by the AI.
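
A toy example makes the mechanism visible. In the synthetic data below, past recruiters systematically favored one group, and a seemingly innocuous feature – playing lacrosse, echoing the example above – happens to correlate with that group. Every number is fabricated for illustration.

```python
# Synthetic demonstration: even with the protected attribute removed from
# the inputs, a model trained on biased historical decisions learns to
# reward an "innocuous" proxy feature that correlates with it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)            # protected attribute (not a model input)
skill = rng.normal(0.0, 1.0, n)          # genuinely job-relevant signal
# The proxy: far more common in group 1 than in group 0.
lacrosse = (rng.random(n) < np.where(group == 1, 0.6, 0.1)).astype(float)

# Biased historical labels: recruiters rewarded group membership, not just skill.
hired = (skill + 2.0 * group + rng.normal(0.0, 1.0, n)) > 1.5

X = np.column_stack([skill, lacrosse])   # protected attribute deliberately excluded
model = LogisticRegression().fit(X, hired)
print("coefficients [skill, lacrosse]:", model.coef_[0])
# The lacrosse coefficient comes out strongly positive: the model has
# reconstructed the old bias through the proxy.
```

Dropping the protected attribute, in other words, does not drop the bias.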

Worse still, if that algorithmic bias isn’t noticed by HR, the AI system could do more than simply reinforce existing biases; it could amplify them by repeatedly optimizing for them.
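
This is why simple audits matter. One check HR teams could run – a sketch, not anything the vendors above are confirmed to use – is the four-fifths rule from US EEOC guidance, which flags adverse impact when any group’s selection rate falls below 80% of the highest group’s rate. The counts below are hypothetical.

```python
# Four-fifths rule audit sketch: flag any group whose selection rate is
# below 80% of the best-performing group's rate. Counts are invented.
def four_fifths_check(outcomes: dict, threshold: float = 0.8) -> dict:
    """outcomes maps group -> (selected, total); returns group -> (ratio, ok)."""
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    best = max(rates.values())
    return {g: (r / best, r / best >= threshold) for g, r in rates.items()}

outcomes = {"group_a": (90, 300), "group_b": (40, 300)}  # hypothetical counts
for group, (ratio, ok) in four_fifths_check(outcomes).items():
    print(f"{group}: impact ratio {ratio:.2f} {'OK' if ok else 'FLAG'}")
```

A check like this doesn’t fix a biased model, but it gives HR a fighting chance of spotting the skew before it compounds.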

Recruitment aside, similar AI-based systems have shown worrisome levels of bias, discriminating against the poor in custodial cases or using race to predict the probability of criminal behavior.

In many ways, a simple fault in an algorithm’s programming could do far more damage than a biased human recruiter, because it is applied consistently to every applicant in the pipeline.

HR must realize that AI is not yet ready to create its own criteria for identifying the best applicants, and must take care not to accept AI-produced results at face value – or to assume they can’t be biased because, after all, they came from a computer.