How HR Can Help Solve the Hiring Discrimination Crisis

“Using dice to assess candidates will not discriminate against any particular groups, yet hiring people using dice is far from fair.” 

This quote from the Cardiff University team at the ACM FAccT Conference 2020 highlights the nuances of hiring discrimination and the challenges of using AI algorithms to address it. Based on the papers presented at the conference, we consider how HR can better leverage AI systems to prevent discrimination, bias, and inequality.

Hiring discrimination has always been a challenge, and the issue is made more complex by the introduction of artificial intelligence (AI). AI algorithms are often “unexplainable” to non-technical business users, and their function can quickly go wrong – perpetuating bias instead of resolving it. And poorly tested algorithms could skew results when hiring scenarios are unconventional – without anyone detecting it.

The Association for Computing Machinery (ACM) holds an annual conference on Fairness, Accountability, and Transparency (FAccT), exploring how AI, machine learning (ML), and data science are linked to issues around bias. One of the major focus areas of this conference in January 2020 was hiring discrimination.

We look at some of the breakthrough papers and research presented at the conference and use their findings to deliver insights for HR on building a minimum-discrimination hiring process.

Learn More: Can Artificial Intelligence Eliminate Bias in Hiring?

How HR Can Manage Hiring Discrimination Introduced by HR Technology

We spoke to teams from Cornell University and Cardiff University, who presented their papers at the conference, for their thoughts on hiring discrimination in the recruitment process. Here’s what we found.

1. Localize automated hiring systems to effectively address concerns around hiring discrimination

A paper titled “What does it mean to ‘solve’ the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems” zeroed in on the importance of localizing recruitment software to the jurisdiction and hiring needs of the organization.

Authors Javier Sánchez-Monedero, Lina Dencik, and Lilian Edwards from Cardiff University suggest that automated hiring systems developed in the U.S. may not translate well to the varied legal contexts of U.K. territories (or elsewhere). As these systems – the paper takes HireVue, Pymetrics, and Applied as its sample – become popular worldwide, the risk of bias creeping in is significant.

“Data protection regulation varies noticeably from one country to another, as does equality and anti-discrimination law,” the authors told us in an exclusive conversation.

“As automated hiring systems come to standardize management techniques, potentially on a global scale, there needs to be a thorough interrogation into the value-laden assumptions that underpin their design, including the way they define the issue of bias,” they added. 

2. Evaluate development and testing procedures of your recruitment software vendor

With complex systems like AI-based hiring software, there will always be a level of black-boxing: HR can view and leverage external functionalities without entirely understanding the internal mechanisms. This heightens the risk of hiring discrimination, according to a paper titled “Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices.”

Authors Manish Raghavan, Solon Barocas, Jon Kleinberg, and Karen Levy from Cornell University argue that the technical workings of anti-bias hiring tools don’t always come under sufficient scrutiny. Data collection and forecasting goals, specifically, are the two areas to watch out for.

“HR professionals should ask vendors to provide information on the validity of a tool: what kinds of data does it use? How was the dataset on which it’s validated built? How well does it perform for a diverse group of applicants?” the authors recommend.

“They should ask about the tool’s limitations – if a tool has only been tested for particular job roles, or a vendor has no information on how a tool performs on candidates with disabilities,” they went on to say.

These are essential questions to ask, especially since anti-discrimination laws are jurisdiction-specific and hold employers accountable to a greater degree than the vendors of recruitment solutions.
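To make the answers to these questions verifiable, HR can ask a vendor to share, or run on its own applicant data, a simple per-group selection check. Below is a minimal sketch in Python, with hypothetical column and group names, computing selection rates by group and the adverse-impact ratio behind the EEOC’s “four-fifths rule”: a common screening heuristic rather than a legal test.

```python
import pandas as pd

def adverse_impact_report(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.DataFrame:
    """Selection rate per group, and each rate's ratio to the best-performing group.

    An impact ratio below 0.8 is the EEOC "four-fifths rule" red flag:
    a signal for closer review, not a finding of discrimination.
    """
    report = df.groupby(group_col)[selected_col].mean().rename("selection_rate").to_frame()
    report["impact_ratio"] = report["selection_rate"] / report["selection_rate"].max()
    report["four_fifths_flag"] = report["impact_ratio"] < 0.8
    return report

# Hypothetical slice of a vendor's validation data: one row per applicant.
applicants = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "selected": [1,   1,   0,   1,   0,   0,   0,   1,   1,   0],
})
print(adverse_impact_report(applicants, "group", "selected"))
```

A vendor that has genuinely validated its tool across diverse applicant pools should be able to produce a report of this shape on request, broken down by every group its tool has been tested on.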

Learn More: How Employers Are Using AI to Stop Bias in Hiring

3. Acknowledge sensitive data attributes to take affirmative action and combat discrimination

Organizations need to be aware of the sensitive data attributes that vendors use, such as race and gender, to proactively prevent hiring discrimination. However, some companies are mandated by law to collect this data (in healthcare, for instance), while others are prohibited from doing so, finds a paper titled “Awareness in practice: tensions in access to sensitive attribute data for anti-discrimination.”

Should companies ignore sensitive data, as blind hiring tactics do? Or can feeding sensitive data attributes into automated hiring systems make them more effective against hiring discrimination? Authors Shazeda Ahmed (University of California), Miranda Bogen, and Aaron Rieke are in favor of the latter approach.

“If companies […] are unable or hesitant to collect or infer sensitive attribute data, then proposed techniques to detect and mitigate bias in machine learning models might never be implemented outside the lab,” the paper states.

“We urge stakeholders, including machine learning practitioners, to actively help chart a path forward that takes both policy goals and technical needs into account,” it adds, highlighting that anti-bias technology must be backed by robust datasets.
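The paper’s core point, that bias which is never recorded can never be measured, is easy to see in code. Below is a minimal, hypothetical sketch of a demographic-parity check on screening outcomes; the column names are assumptions for illustration, and the check simply cannot run without the sensitive attribute.

```python
import pandas as pd

def demographic_parity_gap(outcomes: pd.Series, sensitive: pd.Series) -> float:
    """Largest gap in positive-outcome rate across sensitive-attribute groups.

    0.0 means perfect demographic parity. The paper's point holds in code:
    without the `sensitive` series, this check cannot be computed at all.
    """
    rates = outcomes.groupby(sensitive).mean()
    return float(rates.max() - rates.min())

# Hypothetical screening outcomes (1 = advanced to interview).
advanced = pd.Series([1, 0, 1, 0, 1, 1, 0, 1])
gender   = pd.Series(["f", "f", "f", "f", "m", "m", "m", "m"])
print(f"Demographic parity gap: {demographic_parity_gap(advanced, gender):.2f}")
```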

4. Check for alignment with technology ethics before using recruitment software

The field of “technology ethics” is a gray area open to interpretation. Some stakeholders view it as a PR strategy (“ethics washing”), some treat it as an alternative to political and social activism, while others limit the discussion to a select group of “intellectuals.”

A paper titled “From ethics washing to ethics bashing: a view on tech ethics from within moral philosophy” highlights these trends, arguing that technology ethics must sit within the larger, mainstream field of moral philosophy rather than operate independently outside of it.

This is one more area HR will have to keep up with. The growing umbrella of AI ethics, and the philosophy that guides it, will determine how HR technology is developed, assessed for company use, and then implemented. Vendors may be reluctant to define the “ethics” that guide their algorithm development, which is precisely why HR must initiate these conversations for ethics to be considered at all.

“We must resist narrow reductivism of moral philosophy as instrumentalized performance […] Far from mandating a self-regulatory scheme [among technology companies], moral philosophy, in fact, facilitates the questioning and reconsideration of any given practice,” writes Elettra Bietti from Harvard Law School in the paper.

Learn More: The Ethics of AI in HR: What Does It Take to Build an AI Ethics Framework?

5. Establish an audit framework for your AI hiring algorithms

Given the rapid rise of automated hiring and the proliferation of AI, it is vital to systematically study the incoming data, internal processes, outputs, and organizational impact of this technology. This calls for a formal audit framework, one that prioritizes the elimination of hiring discrimination.

In a paper titled “Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing,” authors from Google and the nonprofit Partnership on AI outline a practical framework for HR practitioners and IT professionals.

The framework covers five stages – scoping, mapping, artifact collection, testing, and reflection – and delves into the exact scope, role, and results of AI in your organization.

“Each stage yields a set of documents that together form an overall audit report, drawing on an organization’s values or principles to assess the fit of decisions made throughout the process. [It] is intended to close the accountability gap in the development and deployment of large-scale artificial intelligence systems,” the paper states.
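As a rough illustration of how an HR team might put this into practice, the sketch below tracks the five stages and the documents each one yields. The stage names follow the paper; the per-stage artifacts and system name are illustrative placeholders, not the paper’s exact document set.

```python
from dataclasses import dataclass, field

# The five stages named in the paper; the artifacts listed per stage are
# illustrative placeholders, not the paper's exact document set.
AUDIT_STAGES = {
    "scoping":             ["use-case statement", "ethical review of scope"],
    "mapping":             ["stakeholder map", "internal artifact inventory"],
    "artifact_collection": ["model card", "datasheet for training data"],
    "testing":             ["adverse-impact report", "per-group performance tests"],
    "reflection":          ["risk analysis", "go/no-go summary"],
}

@dataclass
class AuditReport:
    system_name: str
    collected: dict = field(default_factory=dict)  # stage -> artifacts gathered so far

    def record(self, stage: str, artifact: str) -> None:
        self.collected.setdefault(stage, []).append(artifact)

    def gaps(self) -> dict:
        """Artifacts still missing per stage; an empty dict means audit-ready."""
        return {
            stage: missing
            for stage, needed in AUDIT_STAGES.items()
            if (missing := [a for a in needed if a not in self.collected.get(stage, [])])
        }

audit = AuditReport("resume-screening-model-v2")
audit.record("scoping", "use-case statement")
print(audit.gaps())  # everything except the one recorded artifact is still open
```

The value here lies less in the code than in the discipline it encodes: no stage of the audit counts as complete until its documents actually exist.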

The Bottom Line: Transparency Is Central to Curtailing Hiring Discrimination

AI technology may seem advanced, but it still has a long way to go in preventing discrimination in the hiring process.

A Glassdoor survey of employees in the U.S., U.K., France, and Germany found that 50% of employees want their employers to boost diversity and inclusion. Internal research by CV-Library (U.K.) found that 22% of Brits have experienced discrimination during an interview.

Meanwhile, a study in Finland found that Finnish applicants were up to 3.94 times more likely to get an interview than their counterparts from other ethnic groups.

To combat this, HR must have honest, transparent discussions with their technology providers about the possible effects of the cutting-edge technology they claim to provide. Transparency must also extend to candidates, so that they know exactly why they were selected or rejected.

“Job candidates would benefit from transparency as well: preparing for an algorithmic assessment is quite challenging unless you know how you are being evaluated,” the authors of “Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices” pointed out.

To enable this, HR needs more than “vague claims of scientific support” from their chosen hiring technology partners.

The authors of “What does it mean to ‘solve’ the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems” shared an important recommendation for vendors of HR technology and algorithmic recruitment software: “Include clear validation studies, arguably carried out by a third party, not only in terms of algorithmic biases but also to provide any evidence that their assessment tools work.”

Wrapping Up

Hiring discrimination may be a formidable challenge, but HR can take several foundational steps to address it. Apart from requiring transparency from HR tech and recruitment software vendors, organizations need to build the same culture internally. A robust set of values will ensure that HR teams propose and invest in the right technology to guide key HR decisions, while accepting the limitations of AI technology as it matures.

What are your views on hiring discrimination in 2020? Tell us on Facebook, LinkedIn, or Twitter. We’d love to know more about your opinions!