How, and Why, to Take the Mystery Out of AI Decision-Making


As machine learning tools take on an ever wider range of decisions, from approving bank loans to assessing cancer risk, demands for transparency are growing.

These consequential decisions are often made by algorithms without anyone understanding how they were reached. Machine learning is, in effect, a black-box technology that makes judgments based on complex statistical calculations across large volumes of data.

Machine learning systems such as neural networks are fed huge amounts of data and build advanced mathematical models that identify patterns in it and make decisions. Even the designers of these systems have little idea of the exact processes behind the answers they give.

This lack of transparency is holding back the spread of the technology.

Many financial institutions are wary of implementing machine learning because they cannot prove that its decisions to accept or reject loans, mortgages or credit were made fairly. Regulators demand transparency in decision-making, which machine learning struggles to provide.

Explainable AI tools

Tech firms are racing to find ways to create what they call “explainable AI.”

This week, Google published research in association with Stanford University on how to explain the predictions made by visual recognition systems. It follows last week’s launch of Facebook’s Captum, a library created by the company’s artificial intelligence research team to explain machine-learning decisions. IBM launched AI Explainability 360 earlier this year, and Microsoft has open-sourced InterpretML.
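These toolkits share a common approach: attributing a model’s output back to its inputs. As a rough illustration, and not drawn from any of the announcements above, here is a minimal sketch using Captum’s Integrated Gradients on a stock torchvision classifier; the random tensor standing in for an image and the target class index are placeholders.

```python
import torch
import torchvision.models as models
from captum.attr import IntegratedGradients

# A pretrained image classifier (assumes torchvision's bundled weights).
model = models.resnet18(pretrained=True).eval()

# Dummy tensor standing in for a preprocessed 224x224 RGB image.
image = torch.rand(1, 3, 224, 224)

# Integrated Gradients attributes the score for a chosen class back to
# individual input pixels.
ig = IntegratedGradients(model)
attributions = ig.attribute(image, target=207)  # 207: arbitrary ImageNet class index

# Positive values mark pixels that pushed the model toward the class,
# negative values mark pixels that pushed it away.
print(attributions.shape)  # torch.Size([1, 3, 224, 224])
```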

A significant challenge for explainable AI is assessing the importance given to different factors contributing to a decision. So in assessing a mortgage application, how much weight does the algorithm give to an applicant’s income, postcode, repayment history or size of family?

A machine learning tool will analyze millions of mortgage applications, along with decades of outcomes, to identify the applicants at highest risk of defaulting. But it is unclear how it assesses the data and what weight it assigns to each factor when making those predictions.
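One general-purpose way to probe which factors a trained model leans on, whatever its internals, is permutation importance: shuffle one input at a time and measure how much accuracy drops. The sketch below is purely illustrative, assuming scikit-learn and synthetic stand-in data with invented feature names rather than real mortgage records.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical, randomly generated stand-in for historical mortgage data.
rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n),
    "postcode_risk": rng.random(n),
    "missed_repayments": rng.poisson(0.5, n),
    "family_size": rng.integers(1, 6, n),
})
# Synthetic default labels driven mostly by income and repayment history.
y = (X["missed_repayments"] * 0.8 - X["income"] / 40_000
     + rng.normal(0, 0.5, n)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's accuracy drops, i.e. how much weight that factor carries.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:>18}: {score:.3f}")
```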

One solution is to build what is known as “symbolic regression” into the algorithm so that it records the weight it assigns to each factor.
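The article does not spell out how this would work in practice. One hedged reading, sketched below with the third-party gplearn library on toy data, is to fit a symbolic regression model whose output is an explicit formula, so the weight given to each factor can be read off directly; the feature names and coefficients here are invented for illustration.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor  # third-party symbolic regression library

# Toy stand-in data: a hidden "risk" formula built from two scaled factors.
rng = np.random.default_rng(0)
X = rng.random((500, 2))           # column 0: income (scaled), column 1: missed repayments (scaled)
y = 0.7 * X[:, 1] - 0.3 * X[:, 0]  # risk rises with missed repayments, falls with income

# Evolve explicit mathematical expressions that fit the data.
sr = SymbolicRegressor(population_size=1000, generations=10,
                       function_set=("add", "sub", "mul"),
                       random_state=0)
sr.fit(X, y)

# The fitted model is a readable formula (something like sub(mul(0.7, X1), mul(0.3, X0))),
# so the weight placed on each factor is explicit rather than buried in the model.
print(sr._program)
```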

Another method of building explainable AI is to create a second algorithm that analyzes the behavior of the original algorithm to reveal the steps it took to make the decision. The difficulty here is that the observing algorithm itself must be explainable.
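This observer-model idea is often called a global surrogate. The sketch below, assuming scikit-learn and synthetic data rather than anything described in the article, trains a neural network as the stand-in black box and then fits a shallow decision tree to mimic its predictions, producing a small set of human-readable rules.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# A stand-in black-box model trained on synthetic data.
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                          random_state=0).fit(X, y)

# The second, interpretable algorithm learns from the black box's
# predictions rather than the original labels, so it models the
# black box's behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# A shallow tree prints as a handful of readable rules approximating the
# steps the black box takes; crucially, the observer is simple enough
# to be explainable itself.
print(export_text(surrogate, feature_names=[f"factor_{i}" for i in range(5)]))
print("agreement with black box:", surrogate.score(X, black_box.predict(X)))
```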

Bad decisions put patients at risk

A classic example highlighting the need to make machine learning transparent was a study of pneumonia patients.

The algorithm was tasked with identifying which pneumonia patients had the highest risk of death so they could be placed at the front of the line for urgent treatment. But the machine learning system found that patients with asthma who contracted pneumonia had a lower chance of dying and wrongly concluded that they should go to the back of the line.

In fact, they had a lower risk only because doctors immediately sent asthmatics with pneumonia to intensive care, which raised their survival rate. This shows why it is essential to understand how machine learning reaches a decision rather than simply following its advice.

Google’s research paper shows how a method known as Automated Concept-based Explanation, or ACE, produces explanations of a visual recognition system’s predictions that humans can understand.

ACE analyzed one of Google’s visual recognition systems to see how it recognized different objects. It found that a police car was recognized mainly by its logo while a basketball game was recognized mainly by the players’ jerseys.

A long sentence, a faulty decision

Meanwhile, judges are also using machine learning predictions of how likely an offender is to commit further crimes when they set bail or pass sentence.

If the algorithm predicts that a criminal is highly likely to reoffend, the judge may hand down a longer sentence.

In one case, though, a convict received a long sentence based on a reoffending prediction from a risk assessment tool called COMPAS. He challenged the sentence because he was not allowed to learn how the algorithm worked, but a state appeals court upheld it, ruling that knowledge of how the predictions were made was not relevant.

But as machine learning is applied ever more widely to sensitive decisions, people will demand to know what lies behind them. Explainable AI will be vital to breaking open the black box of machine learning and winning social acceptance for the technology.