Will LinkedIn’s Fairness Toolkit Mark the End of AI Bias?


LinkedIn has launched the LinkedIn Fairness Toolkit (LiFT) to address the pressing need for fairness in AI training data sets and algorithms. Will the new toolkit help end the rise of AI bias around the world?

Artificial intelligence (AI) has fueled innovation across every industry. From healthcare and manufacturing to financial services and recruitment, every sector has seen tremendous progress thanks to AI. Despite these accomplishments, AI systems continue to demonstrate bias and unfairness, affecting people and businesses across the world.

AI bias, or machine learning bias, occurs when an algorithm makes prejudiced decisions that can have severe social and economic consequences. Bias in AI systems often stems from the training data: data engineers and data scientists can introduce unconscious cognitive biases at every step, from data collection to data preparation. Models trained on biased data can easily produce racial, gender, and cultural discrimination. It is therefore imperative to build fair, bias-free AI systems that work towards the betterment of society and businesses.
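As a minimal illustration of what biased training data can look like (this toy check is not part of LiFT, and the group names, labels, and object name are made up), the Scala sketch below compares positive-label rates across demographic groups in a small data set; a large gap between groups is one simple warning sign that the data may encode bias.

```scala
// Minimal sketch (hypothetical data, not part of LiFT): compare positive-label
// rates across groups in a toy training set.
object TrainingDataBiasCheck extends App {
  final case class Example(group: String, label: Int) // label 1 = positive outcome

  val trainingData = Seq(
    Example("group_a", 1), Example("group_a", 1), Example("group_a", 0),
    Example("group_b", 0), Example("group_b", 0), Example("group_b", 1)
  )

  // Fraction of positive labels per group
  val positiveRateByGroup: Map[String, Double] =
    trainingData
      .groupBy(_.group)
      .map { case (group, examples) =>
        group -> examples.count(_.label == 1).toDouble / examples.size
      }

  positiveRateByGroup.foreach { case (group, rate) =>
    println(f"$group: positive-label rate = $rate%.2f")
  }
}
```

A real audit would run the same kind of comparison over production-scale data rather than a hard-coded example, which is exactly the scale problem LiFT is built to handle.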

Last week, LinkedIn unveiled its LinkedIn Fairness Toolkit (LiFT) to help identify bias in AI models at scale. LiFT is an open-source project that detects, measures, and mitigates bias in training data sets and algorithms. The new toolkit builds on research and development efforts the networking platform has been pursuing since 2017.


The Rise of Unfair AI and Machine Learning Algorithms

In 2015, Google Photos tagged photos of Black people as gorillas, while in 2019, researchers found that a major healthcare algorithm in the U.S. displayed racial bias against people of color. Another case of AI bias is Facebook’s ad algorithm. In 2019, the U.S. Department of Housing and Urban Development charged Facebook with violating the Fair Housing Act by allowing advertisers to target ads by race, gender, and religion on its platform. The social media platform has since set up a new equity team to study its AI algorithms for bias.

The surge in racial inequality linked to AI-based surveillance tools led several prominent tech companies to pull back their facial recognition products. In June 2020, IBM led the pack, followed by Amazon and Microsoft. However, the industry is still grappling with growing AI bias.

In 2018, Joy Buolamwini and Timnit Gebru released a paper focusing on fairness, the Gender Shades project. The paper audited the accuracy of commercial gender classification systems offered by IBM, Microsoft, and Face++, among others. The analysis revealed that 95.9% of the faces misgendered by Face++ were women’s faces, 93.6% of the faces misgendered by Microsoft belonged to darker-skinned subjects, and IBM’s system showed a 34.4% gap in error rate between lighter-skinned men and darker-skinned women. This study made several companies rethink their AI strategies.

Igor Perisic, chief data officer at LinkedIn, says, “AI fairness work is subject to the blind spots and biases of researchers and engineers themselves, and real-world datasets often reflect real-world disparities and markedly unfair social realities. We should build upon AI work and research and attempt to implement fair and ethical AI systems in the real world.”

Despite several toolkits already on the market, such as IBM’s AI Fairness 360, Google’s Explainable AI, and FAT Forensics, there is still a considerable gap in tackling fairness at large scale. Moreover, some existing toolkits are tied to specific cloud vendors, which prompted LinkedIn to open source its toolkit for AI practitioners, developers, and companies worldwide.


How Does LiFT Work?

LiFT is a Scala/Spark library designed for ad hoc fairness analysis, exploratory analysis, and production workflows. Its bias measurement components can evaluate bias across the entire lifecycle of machine learning (ML) workflows, and the library can be integrated at any stage of an ML pipeline. Designed for large-scale ML, LiFT leverages Apache Spark to distribute fairness computations over large data sets across numerous nodes.
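To make the idea concrete, the sketch below computes one simple data-set-level fairness measure, the gap in positive-outcome rates between groups, using plain Apache Spark. It does not use LiFT’s actual API; the column names, group values, and object name are illustrative assumptions. It only mirrors the kind of distributed, Spark-based measurement LiFT performs over training data and model output.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Sketch only: a Spark job that measures the gap in positive-outcome rates
// between groups. Column names and data are hypothetical, not LiFT's API.
object FairnessMetricSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("fairness-metric-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Toy scored data set: (protected attribute, model decision; 1 = positive outcome)
    val scored = Seq(
      ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
      ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0)
    ).toDF("group", "decision")

    // Positive-outcome rate per group
    val rates = scored.groupBy("group")
      .agg(avg($"decision").alias("positive_rate"))
    rates.show()

    // Demographic parity gap: difference between the highest and lowest group rates
    rates.agg(
      (max($"positive_rate") - min($"positive_rate")).alias("demographic_parity_gap")
    ).show()

    spark.stop()
  }
}
```

Because the aggregation is expressed in Spark, the same computation scales from this toy example to production-sized training data spread across a cluster, which is the design point the LiFT library targets.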

Currently, LinkedIn is using LiFT to compute fairness metrics for the training data of its own models, such as its Job Search model and its anti-harassment classification model. The LiFT library is now available on GitHub, a step intended to encourage other companies to measure and mitigate AI bias.

AI has penetrated every corner of life, and now it is time to ensure that these powerful tools solve complex problems efficiently rather than produce unfair decisions. AI practitioners, data engineers, and data scientists will need to ensure that training data sets are fair and representative across cultural groups, genders, people of color, and other demographics.

Do you think LinkedIn’s Fairness Toolkit will help end AI bias? Comment below or let us know on LinkedIn, Twitter, or Facebook. We’d love to hear from you!