Twitter’s Algorithm Failure Shows Big Tech’s Ongoing Struggle With AI Bias

Twitter’s photo cropping algorithm exhibited AI bias this week, drawing severe criticism from its users. The microblogging tech giant revealed it had tested the algorithm for AI bias and conceded there is a need for continuous improvement. Let’s take a look at the measures companies are taking to address this persistent issue.

Artificial intelligence has transformed industries on a large scale. AI software revenue is expected to hit $126.0 billion by 2025, and the prospect of using AI to transform businesses has never been more promising. Yet despite AI’s ever-forward march, issues of gender and racial bias persist.

A DataRobot survey reveals that compromised brand reputation and loss of customer trust are the most concerning repercussions of AI bias. A lack of visibility into training data and the difficulty of building trustworthy algorithms are among the most common challenges companies face in eliminating it. Even though 83% of AI professionals have established guidelines to combat AI bias, the tech world still grapples with the issue. Recently, Zoom and Twitter both faced sharp criticism for algorithmic bias.

On September 19, 2020, Ph.D. student Colin Madland tweeted screenshots of a Zoom meeting in which Zoom’s virtual background feature kept cropping out the face of a faculty member with a dark complexion. Madland then noticed that Twitter’s preview also cropped out the faculty member’s face and displayed only his own, clearly indicating racial bias in the algorithm. Twitter’s photo preview tool generates a cropped version of each uploaded image, and in these cases it displayed only the person with lighter skin unless the user clicked on the image to expand it.

A faculty member has been asking how to stop Zoom from removing his head when he uses a virtual background. We suggested the usual plain background, good lighting etc, but it didn’t work. I was in a meeting with him today when I realized why it was happening.

— Colin Madland (@colinmadland) September 19, 2020

Tony Arcieri, co-founder of iqlusion and a polyglot programmer, tested the photo-cropping algorithm with photos of Mitch McConnell and Barack Obama and observed the same behavior.

Trying a horrible experiment…

Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama? pic.twitter.com/bR1GRyCkia

— Tony “Abolish (Pol)ICE” Arcieri 🦀 (@bascule) September 19, 2020

Other Twitter users tested the photo-cropping algorithm with news reporters, cartoon characters, and dogs, and reported the same issue. The tweets caught the attention of the developers and data scientists who worked on the algorithm.
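Twitter has described its auto-crop model as a saliency predictor: a neural network scores which regions of an image a viewer is likely to look at first, and the preview is cropped around the highest-scoring region. The Python sketch below shows the general shape of that selection step; the saliency map, window size, and stride are illustrative assumptions, not Twitter’s production pipeline.

```python
# Minimal sketch of saliency-based cropping: slide a preview-sized window
# over a per-pixel saliency map and keep the placement with the highest
# total score. The saliency model itself is assumed to exist upstream.
import numpy as np

def crop_by_saliency(saliency: np.ndarray, crop_h: int, crop_w: int):
    """Return the (top, left) offset of the crop window whose region
    of the saliency map has the highest total saliency."""
    height, width = saliency.shape
    best_score, best_pos = float("-inf"), (0, 0)
    for top in range(0, height - crop_h + 1, 8):       # coarse 8-pixel stride
        for left in range(0, width - crop_w + 1, 8):
            score = saliency[top:top + crop_h, left:left + crop_w].sum()
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos

# If the upstream model systematically scores lighter faces as more salient,
# this otherwise neutral argmax reproduces that bias in every preview.
```

The selection step itself is mechanical; the fairness question lives entirely in how the saliency scores are learned.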

Twitter’s chief design officer, Dantley Davis, ran various experiments with the image and responded that the company was still investigating the neural network.

Joy Buolamwini, founder of the Algorithmic Justice League, responded with a barrage of questions to a tweet from Zehan Wang, head of Cortex Applied Research at Twitter.

Meanwhile, Twitter spokesperson Liz Kelley tweeted that the company had tested for bias before shipping the model and would open-source its work for developers and engineers to review and replicate.

thanks to everyone who raised this. we tested for bias before shipping the model and didn’t find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do. we’ll open source our work so others can review and replicate.

— liz kelley (@lizkelley) September 20, 2020

How Can Companies Avoid AI Bias on Their Platforms?

Too often, racial bias creeps into AI models through their training datasets, and Twitter is not the first company to be accused of it. Several tech companies, such as Facebook, Google, and IBM, have faced sharp criticism for embedding bias in algorithms. To combat this hot-button issue, the tech industry has rolled out various fairness toolkits, such as IBM’s AI Fairness 360, Google Explainable AI, FAT Forensics, and the LinkedIn Fairness Toolkit. However, despite these toolkits, no concrete solution to AI bias has emerged.
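To make the idea concrete, here is a minimal, from-scratch sketch of one check such toolkits automate: disparate impact, the ratio of favorable-outcome rates between an unprivileged and a privileged group. The sample data and the roughly 0.8 warning threshold (the common “four-fifths rule”) are illustrative assumptions; real toolkits compute this and many other metrics across full datasets.

```python
# Hand-rolled version of the disparate impact metric that fairness
# toolkits report. All data here is made up for illustration.
import numpy as np

def disparate_impact(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    Values near 1.0 suggest parity; below roughly 0.8 is a common
    red flag (the "four-fifths rule")."""
    rate_unpriv = y_pred[group == 0].mean()
    rate_priv = y_pred[group == 1].mean()
    return rate_unpriv / rate_priv

# Toy decisions: 1 = favorable outcome; group 0 = unprivileged.
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(f"Disparate impact: {disparate_impact(y_pred, group):.2f}")  # 0.33 -> flag
```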

So what can companies do to address this issue? 

Firstly, companies need to broaden their horizons and welcome experts from disciplines such as law, the humanities, political science, behavioral science, neuroscience, and psychology to help train AI algorithms. A diverse team is better placed to decode the complexities of human behavior and to catch unconscious bias in training datasets. Additionally, companies can invest in sophisticated white-box AI systems to gain better visibility into the data used to train them.

Chris DeBrusk, managing director and head of transformation at BNY Mellon and a partner at Oliver Wyman, says, “To address potential machine-learning bias, the first step is to honestly and openly question what preconceptions could currently exist in an organization’s processes and actively hunt for how those biases might manifest themselves in data. Since this can be a delicate issue, many organizations bring in outside experts to challenge their past and current practices. Once potential biases are identified, companies can block them by eliminating problematic data or removing specific components of the input data set.”
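The last step DeBrusk mentions, removing specific components of the input data set, might look like the hedged sketch below, in which known sensitive columns are dropped before training. The column names are hypothetical, and the comments flag the standard caveat that proxy features can re-encode what was removed.

```python
# Hedged sketch of dropping sensitive input components before training.
# Column names are hypothetical. Caveat: removing a sensitive column does
# not remove proxies for it (e.g., zip code correlating with race), so
# group-level metrics like disparate impact must be re-checked afterward.
import pandas as pd

SENSITIVE_COLUMNS = ["race", "gender", "age"]

def strip_sensitive(features: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the feature table without known sensitive columns."""
    present = [c for c in SENSITIVE_COLUMNS if c in features.columns]
    return features.drop(columns=present)
```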

AI systems have infiltrated various sectors, including hiring, healthcare, insurance, and law enforcement. It is time the tech industry took concrete steps to fix bias that could affect millions of people and to rebuild trust.

What are your thoughts on the growing AI bias in the tech industry? What other measures can companies implement to combat this bias? Comment below or let us know on LinkedIn, Twitter, or Facebook. We’d love to hear from you!