IBM has announced the launch of “Trust and Transparency,” a new tool to detect bias in artificial intelligence (AI) systems, as fears grow that the new technology is making unaccountable decisions that discriminate against certain social groups.
AI technology is often termed a “black box” of inputs and outputs, as even AI developers do not fully understand how its decisions are reached. Data is fed into the system, and machine learning algorithms work out the optimal result mathematically. Without knowing how a decision was made, there is a creeping fear that it could be biased.
IBM’s cloud-based software attempts to explain how AI decisions are made and to detect bias in the decision-making process. The service, which is also being offered as an open source tool, suggests corrections for any bias it encounters.
Tech Giants Guilty of Bias and Error
Several cases of algorithmic and AI discrimination have come to light over the years. Twitter recently acknowledged that its algorithms had not always been impartial in surfacing tweets from political accounts, admitting that 600,000 accounts had been unfairly filtered.
The algorithm looked at the behavior of an account’s followers in deciding whether to highlight tweets or downgrade them, an automation the company says has now been corrected. In 2015, Google’s Photos app erroneously labeled a black couple as “gorillas” when automatically tagging pictures with its AI software. The app had also previously confused species, tagging pictures of dogs as horses.
Police in the UK use AI to determine whether suspects are habitual criminals who should be denied bail and kept locked up. The service relies on an algorithm fed with data including postcodes and previous offending behavior, an approach that has been flagged as potentially deepening institutional bias against certain groups of people.
Inherited Bias
One source of AI bias is the training data used. A job-hiring algorithm may be trained to seek out people who resemble previously successful candidates. But if past hiring was already biased, the AI could inherit a bias toward white, male, middle-class candidates.
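To make the mechanism concrete, here is a minimal, self-contained sketch in Python using synthetic data and hypothetical feature names: a classifier trained on historical hiring labels that favored one group reproduces that gap at prediction time, even though the group attribute carries no job-relevant information.

```python
# Sketch only: synthetic data and hypothetical feature names, not any
# real hiring system. Historical labels favor group 1; the trained
# model inherits and reproduces that disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)        # 1 = historically favored group
skill = rng.normal(0, 1, n)          # genuinely job-relevant signal

# Biased historical decisions: identical skill, but group-1 candidates
# were hired at a higher rate.
hired = (skill + 0.8 * group + rng.normal(0, 1, n)) > 0.5

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"predicted hire rate, group {g}: {pred[group == g].mean():.2f}")
# The model simply automates the historical disparity.
```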
So is AI simply automating the status quo? Evidence of reinforced bias not only gives AI a bad name; such decisions can also leave institutions and companies open to damaging legal action. Research by IBM among 5,000 business executives found that 82% of businesses are considering implementing AI to generate revenue, but 60% fear that AI could create liability issues.
How Would IBM’s System Help?
The company cites the example of the data bias checker discovering bias against black people in a home loan data set. The analysis might find that the source of the bias was specifically against black women of a certain age.
A data scientist could then use the finding to add more data for that subgroup to the data set and overcome the bias. IBM says its tool not only detects bias in the training data used for the AI but can also detect bias in the decision-making algorithm.
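This detect-then-mitigate loop might look something like the following sketch, which uses IBM’s open-source AI Fairness 360 toolkit (aif360). The toy data and column names (“race”, “income”, “approved”) are illustrative assumptions, not IBM’s actual home-loan data set, and reweighing stands in for the more general fix of adding data for the under-represented subgroup.

```python
# Hedged sketch using IBM's open-source AI Fairness 360 toolkit
# (pip install aif360). Toy data and column names are assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy loan data: label 1 = approved, 0 = denied.
df = pd.DataFrame({
    "race":     [1, 1, 1, 1, 0, 0, 0, 0],
    "income":   [55, 60, 42, 70, 58, 61, 43, 69],
    "approved": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["race"],
    favorable_label=1,
    unfavorable_label=0,
)
privileged = [{"race": 1}]
unprivileged = [{"race": 0}]

# Detect: disparate impact < 1 means the unprivileged group is
# approved less often than the privileged group.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("disparate impact before:", metric.disparate_impact())

# Mitigate: reweigh instances so outcomes balance across groups,
# analogous to adding data for the under-represented subgroup.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
reweighed = rw.fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(
    reweighed, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("disparate impact after:", metric_after.disparate_impact())
```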
The system could offer businesses some protection against claims of negligence in discrimination cases. If they implemented a detection tool, it would show that steps had been taken to try to avoid bias in the data.
The service also allows users to examine the lineage and provenance of AI platforms and how they were created, an aspect that grows more important as legislation demands increasing transparency from AI. The European Union’s GDPR rules give citizens the right to demand an explanation of how an AI system makes decisions about them and to appeal against decisions made with no human involvement.
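What such an explanation might look like in practice depends on the model. For a linear model it can be as simple as reporting each feature’s contribution to one applicant’s score, as in this minimal sketch (hypothetical feature names, synthetic data); genuinely black-box models need surrogate techniques such as LIME or SHAP instead.

```python
# Sketch only: for a linear model, each feature's contribution to a
# single decision is coefficient * value. Names and data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed"]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -2.0, 0.8]) + rng.normal(size=500)) > 0

model = LogisticRegression().fit(X, y)

applicant = X[0]
contribs = model.coef_[0] * applicant  # per-feature share of the score

print("decision:", "approved" if model.predict([applicant])[0] else "denied")
for name, c in sorted(zip(features, contribs), key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.2f}")
```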
Machine Learning Watchdog
Tech giants are taking this issue seriously: Microsoft and Facebook are both working on bias-detection tools, and Google recently launched a “What-If” tool to help users see how machine learning systems are working.
Of course, human decisions are subject to bias, whether in job selection, offering loans or classifying people. AI promises an opportunity to automate away such human discrimination. But if biased data is fed into the black box, it should be no surprise that biased outcomes come out. The new breed of bias-detection tools should help reduce the chance of discriminatory decision-making.
The performance and effectiveness of these tools will be closely watched. If they fall short, businesses face reputational damage and legal liability from being drawn into software discrimination battles.