AI Bias Challenges in HR and 6 Ways Companies Can Address Them

essidsolutions

Employers across many fields are turning to artificial intelligence (AI) and machine learning (ML) algorithms to bring the technology into their decision-making processes.

These algorithms deliver sophisticated and pervasive tools that leverage massive datasets to carry out tasks that, until very recently, only humans could do. Integrating AI/ML promises unparalleled efficiency through statistical rigor.

The technology applies to HR as much as to any other field. It promises to eliminate the subjective biases of recruiters and to simplify the complexities of decision-making in general.

But there is a gap between the seamless, sophisticated role AI promised to play in HR and what we are witnessing today. Machine algorithms fall short of our expectations, and HR now faces disruption and transformational challenges as AI is integrated into its functions.

See More: 3 Pillars of Trustworthy AI in the Workplace

The Problem Is Much Deeper

Joy Buolamwini, a researcher at the MIT Media Lab, has shown how real-world biases have crept into AI, especially facial recognition technology. At the end of the day, ML/AI systems are built by humans, and a system will only be as smart as the data it has been trained on.

Buolamwini has been advocating for the emerging field of “algorithmic accountability.” Her TED Talk on algorithmic bias has been viewed and well-received worldwide. She is also the founder of the Algorithmic Justice League, which raises awareness of the issue.

A few years after joining the MIT Media Lab, she witnessed a rather shocking bias: facial recognition software detected a face in front of the screen only when she put on a white mask. Worse, this software had already percolated into the mainstream and was being widely adopted.

She’s now leading the fight against ML algorithmic bias, an issue she has termed “coded gaze.”

AI Will Only Be as Good as Its Human Trainers

There’s no denying that bringing AI into the HR domain has enhanced HR’s capabilities in a collaborative environment rather than making HR irrelevant, as many predicted. And the integration of AI and HR is improving each day by leaps and bounds.

But the system comes with its own biases, and there is no fairness approach in place. AI integration sometimes triggers socio-psychological concerns among candidates, leaving their true potential untapped.

In 2018, everyone went into a frenzy over Amazon’s secret AI recruiting tool. The company scrapped it after its ML specialists recognized a serious bias issue: the system was penalizing women in the recruitment process.

Face-analysis services developed by IBM and Microsoft were tested on the task of identifying a person’s gender from photos alone. The results showed that the algorithms were nearly perfect at identifying lighter-skinned men but frequently made errors when analyzing images of darker-skinned women.

In another example, a study by Georgetown Law estimates that 117 million American adults appear in the facial recognition databases used by the majority of law enforcement agencies, and the Black population is disproportionately represented in those databases.

Challenges HR Is Facing With the Implementation of AI in its Functions

● Performance metrics: HR departments at many companies are abandoning AI-based performance appraisal scores because they give a skewed view of how good a particular candidate might be at their job. These metrics have validity and bias issues because they are based on historical data that can be incomplete or drawn from faulty datasets. Moreover, it is difficult to gauge an individual’s performance from these metrics: when people work in teams, interdependencies make it hard to measure any one person’s exact contribution.

● Lack of “big data”: Unlike many other fields, HR lacks the massive, structured datasets needed for a rigorous historical analysis of employees. This only gets harder because most companies employ at most a few thousand people, a sample too small for large-scale analysis tools to be of much help.

● AI doesn’t give a concrete picture: It is human nature to hide one’s actual disposition when under observation. In the age of AI, people have learned the limitations of the technology, which fails to capture their true character, capabilities, and ethical nature.

● AI lacks fairness: It ignores the fairness approach, consisting of procedural and distributive justice, that should inform decisions involving candidates. This concern is yet to be solved.

Mitigating AI Bias Through Deeper Systematic Solutions

AI technology in HR can be an opportunity well seized, but only if the existing algorithmic bias is addressed with more inclusivity, justice, and fairness in the machine-training process.

Below are some strategies that organizations can deploy to mitigate and prevent AI bias in HR:

● Implementing AI in HR only where it is needed: Some aspects of the hiring process will always require an HR leader. AI is often incapable of looking at problems from multiple perspectives, such as problems that require a manager’s unique point of view or a close look at employees’ work.

○ AI might not find the ideal candidate: AI lacks qualitative measures, such as gauging an employee’s alignment with the company’s goals and culture. It can also overlook psychological and emotional traits, the qualities that make employees who they are at a deeper level and play a major role in their work behavior.

○ Incompetent to make connections: The algorithm may fail to make connections signaling that a person’s history makes them the right fit. It might outright reject applicants who don’t match the hiring criteria but who could nonetheless make a strong contribution to the company.

● Having a diverse team working on AI and algorithms: To address the root cause of this bias, data scientists believe a diverse AI algorithm team is critical. A diverse team means including not only women but also people of different ages, skin colors, backgrounds, abilities, and sexual orientations.

More inclusive ML training ensures the technology is shaped by the people who will eventually use it, rather than by a single group that sets the algorithms and standards.

Incorporating diversity, equity, and inclusion (DEI) principles into an organization’s AI algorithm policies and practices is another way to deal with the issue.

● Promoting a culture of AI responsibility and ethics: AI can absorb the biases of its trainers, who may have injected them into the training datasets quite unknowingly. Such biases become a point of friction when ethical responsibility is lacking, since they can put minority groups at a disadvantage.

Organizations need to enable a team culture where teammates understand the nuances of bias and discriminatory issues and reflect a sense of personal responsibility toward rectifying those biases.

● Developing responsible datasets: HR departments should run regular data checks to ensure that existing and new data is free of systematic bias, is inclusive, and will benefit everyone it is developed for.

● Giving the algorithm development team an ethical framework: Handing the development team an ethical framework, one that excludes variables likely to trigger algorithmic bias, is another part of the solution.

● Corporate and industry governance: A shared, industry-wide responsibility to tackle AI bias will be the preeminent step toward solving the problem. At the industry level, this means holding senior leaders accountable and questioning power dynamics and structures that lack diversity or are unfavorable to some groups.
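One concrete form the data checks above can take is the “four-fifths rule” screen long used in US employment analysis: compare selection rates across demographic groups and flag the process when the lowest group’s rate falls below 80% of the highest. A minimal sketch in Python, using hypothetical group labels and screening outcomes (the data and thresholds here are illustrative, not from any real system):

```python
from collections import defaultdict

def selection_rates(records):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(records):
    """Ratio of the lowest group selection rate to the highest.

    Values below 0.8 fail the common "four-fifths rule" screen.
    """
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (group label, passed screen?)
records = [
    ("A", True), ("A", True), ("A", False), ("A", True),    # group A: 3/4
    ("B", True), ("B", False), ("B", False), ("B", False),  # group B: 1/4
]
ratio = disparate_impact_ratio(records)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: selection rates fail the four-fifths rule")
```

A check like this is only a screen, not proof of bias or its absence; it should feed into the kind of human review and diverse-team oversight described above.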

See More: Data Driven HR: The Key To Retaining Talent and Enhancing Employee Experience

AI/ML in HR Has a Promising Future, But Only If…

AI is best received as an augmenting force that provides insightful analytics for efficient decision-making. It shouldn’t be viewed as a hard-core problem solver but rather as a helping hand in HR processes.

As the technology seeps into HR, the challenges discussed above will take time to solve. But these problems confirm how important humans remain to evaluating each step of the HR process.

The future looks promising overall: some of these decisions will be best made with the help of AI tools, while others will need mindful consideration before algorithms can be designed to avoid discriminatory results.

If you have implemented AI, what bias challenges are you facing? And how are you overcoming them? Let us know on LinkedIn, Facebook, and Twitter.