How AI can be Trained Against Bias


With facial recognition powered by artificial intelligence (AI) becoming more commonplace – from confirming a customer’s identity when accessing a bank account to granting employees access to secure locations – the technology has come under fire for what critics call “natural” biases.

We believe that the failings of a small number of digital identity platforms are leading to misconceptions about the AI technology behind such platforms as a whole.

AI operates as a blank canvas and learns from what it observes. Contrary to what science fiction may have us believe, these systems are neither inherently nor ultimately malicious. They rely on the data sets they are trained on, and their behavior is the outcome of that experience. Faulty data sets may result in algorithms replicating or promoting undisclosed – or unintentional – human biases.

However, one of the attractive characteristics of machine learning (ML) is that it lets AI systems learn, adjust, and improve based on experience. So we can empower our AI systems with expanding and diverse data sets to make even better decisions. We can teach AI-based digital identity platforms how to uncover biases and help find solutions. 

In “Notes from the AI frontier: Tackling bias in AI (and in humans),” the McKinsey Global Institute acknowledged that “AI can reduce humans’ subjective interpretation of data because machine learning algorithms learn to consider only the variables that improve their predictive accuracy, based on the training data used. In addition, some evidence shows that algorithms can improve decision making, causing it to become fairer in the process.”

Besides artificial intelligence’s inherent learning abilities, AI decision-making can be inspected, evaluated, and investigated – unlike human decision-making and biases, which may be difficult to uncover or trace to their source. This ability allows us to continually improve our AI-based systems as we work toward freeing them of any inherent biases.


Leverage AI’s Ability To Adapt and Learn

Both machines and humans are shaped by what they experience. Bias becomes a problem in AI systems built with limited data sets or with data that doesn’t exhibit a diversity of demographics. For example, if a team unwittingly trains its AI-based system to recognize only a single demographic as bank customers, people outside that demographic may be denied access to their accounts when trying to authenticate via their smartphones. While this is a simple example, these are exactly the kinds of gaps we must eliminate from our AI platforms.

Digital authentication technology can only work when the AI is fed a large and diverse set of identities, so that it can effectively recognize the full range of human biometric features. Unbiased recognition starts with how the technology is trained, and with enabling it to evaluate different genders and ethnicities from its conception. AI and ML have the analytic power to absorb substantial data sets and filter through all the characteristics that make humans unique – including ethnicity – to make unbiased decisions.

How can we help ensure there isn’t bias in our AI platforms? To begin with, we need to (1) ensure that we’re feeding them large enough data sets and (2) train our AI platforms with even distributions so that the data sets are not heavily weighted toward one group or another.

To mitigate bias, we must look at both the data sets’ size and scope. An extensive data set without demographic distribution or a diverse data set without a large enough representation can easily tilt toward bias.
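The two checks above – data set size and demographic distribution – can both be automated before training begins. The sketch below is a minimal illustration of that idea; the thresholds (`min_total`, `max_skew`) and the demographic labels are hypothetical placeholders, not values from any real platform.

```python
from collections import Counter

def check_demographic_balance(labels, min_total=10_000, max_skew=0.15):
    """Flag a training set that is too small or too heavily
    weighted toward any one demographic group.

    labels    -- demographic label for each training sample
    min_total -- minimum acceptable sample count (assumed threshold)
    max_skew  -- maximum allowed deviation from a uniform share
    """
    counts = Counter(labels)
    total = sum(counts.values())
    issues = []
    if total < min_total:
        issues.append(f"dataset too small: {total} < {min_total}")
    expected = 1 / len(counts)  # uniform share per group
    for group, n in counts.items():
        share = n / total
        if abs(share - expected) > max_skew:
            issues.append(f"group '{group}' share {share:.2f} deviates "
                          f"from expected {expected:.2f}")
    return issues

# Example: three groups, one heavily over-represented
labels = ["A"] * 7000 + ["B"] * 2000 + ["C"] * 1500
print(check_demographic_balance(labels))
```

A check like this only catches gross imbalance in labeled groups; it is a first gate, not a substitute for evaluating the trained model’s behavior per group.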

Take Ownership of Role in Eliminating Bias

But there’s also much more that we can do. As stewards of AI-enabled technology, we must commit to ensuring that algorithmic bias doesn’t creep into our digital identity platforms. We need to openly acknowledge the problem in order to develop practices that eliminate bias. We may not always be able to guarantee that our data sets are large or diverse enough, but we can perform ongoing analysis of the data sets we feed into our AI platforms, as well as the results those platforms generate, to ensure bias is not part of the equation.
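Ongoing analysis of the results a platform generates might look like the sketch below: computing the false-rejection rate per demographic group from verification logs, so a gap between groups surfaces as a number rather than an anecdote. The `(group, genuine_user, accepted)` record schema is an illustrative assumption, not any platform’s real log format.

```python
def per_group_false_rejection(records):
    """Compute the false-rejection rate per demographic group from
    verification logs. Each record is a (group, genuine, accepted)
    tuple: 'genuine' marks a legitimate user, 'accepted' the
    platform's decision.
    """
    stats = {}  # group -> [false_rejections, genuine_attempts]
    for group, genuine, accepted in records:
        if not genuine:
            continue  # only genuine users can be falsely rejected
        rejections, attempts = stats.setdefault(group, [0, 0])
        stats[group] = [rejections + (0 if accepted else 1), attempts + 1]
    return {g: r / a for g, (r, a) in stats.items()}

# Example: group B's genuine users are rejected three times as often
records = (
    [("A", True, True)] * 9 + [("A", True, False)]
    + [("B", True, True)] * 7 + [("B", True, False)] * 3
)
print(per_group_false_rejection(records))  # → {'A': 0.1, 'B': 0.3}
```

Tracking a metric like this over time, and alerting when the gap between the best- and worst-served group widens, turns “ensure bias is not part of the equation” into a routine engineering check.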

Many organizations, such as the Alan Turing Institute’s Fairness, Transparency, Privacy group and the Partnership on AI (PAI), are already working to eliminate bias. They share a vision for AI that fosters diversity and reflects society.

Another organization worth noting is AI4ALL, a U.S.-based non-profit dedicated to increasing diversity and inclusion in AI education, research, development, and policy. It looks to build “better AI” through summer programs for college students and alumni outreach. Its vision for AI fosters diversity and is more human-centered: “When people of all identities and backgrounds work together to build AI systems, the results better reflect society at large. Diverse perspectives yield more innovative, human-centered, and ethical products and systems.”

Understanding how to mitigate AI bias will become more important as the technology is used for more and more decision-making – not only in business but across the whole human ecosystem.

At the top of our list for eliminating bias from AI is employing a diverse group of engineers and developers, which may organically help keep data sets from favoring one group over another. When surrounded by teams that reflect different genders, ethnicities, and backgrounds, we are more likely to spot and anticipate whether the data represents the full human spectrum. Not surprisingly, several studies have found that the demographics of the engineering team play a role in whether predictions turn out biased.

In the end, diverse perspectives yield more innovative, human-centered, and ethical products and systems. Digital identity is the future; it’s where identity verification is heading. We must understand our role in eliminating bias from AI and prioritize collecting large and diverse data sets that reflect the full humanity of our world. Ensuring these AI-based platforms are bias-free will help secure more trust in biometric authentication as an effective and even desired means of verifying identity.

How can daily tasks be shifted to digital authentication once bias is removed from key algorithms? Share with us on LinkedIn, Twitter, or Facebook. We’d love to know!