Artificial Intelligence: Too Dangerous for Human Hands?


Hopes that artificial intelligence would usher in a new era of improved health care were severely blunted this week as it was revealed that IBM Watson had frequently suggested unsafe and incorrect treatments for cancer patients.

Amazon’s Rekognition facial identification software also incorrectly matched 28 US Congress members with criminal mugshots, diminishing expectations for more efficient public services.

As Uber axed its self-driving trucks division this week to concentrate on its autonomous car business – still reeling from a fatal accident earlier this year – and new research revealed significant concerns about the accuracy (and wider application) of Google Translate, the limitations of artificial intelligence are coming sharply into focus. These setbacks may be dismissed as teething problems, but they raise serious doubts about AI’s readiness for real-world deployment.

Diagnostic Setbacks for AI

One of AI’s great promises has been to revolutionize cancer care: analyzing patient data, tumors and therapies, and applying machine and deep learning to suggest the most effective treatment plan. But documents obtained by the website Stat reveal that medical professionals have grave misgivings about IBM’s Watson for Oncology software.

The internal documents detail presentations given in June by IBM’s former deputy health chief Andrew Norden, disclosing that Watson’s suggestions were often inaccurate, raising serious questions about the underlying technology.

In one case, Watson recommended that a lung cancer patient with severe bleeding take a drug linked to hemorrhages, which is considered medically dangerous; another doctor reportedly labeled the product “a piece of s**t.”

Just Bad Advice?

The documents reportedly blame the poor performance on the data used to ‘train’ Watson to make its judgments. Because real patient data was difficult to obtain, doctors created synthetic cases instead: Watson’s treatment suggestions were therefore based on the preferences of the doctors who supplied the data rather than on analysis of actual outcomes.

A spokeswoman for one of the hospitals involved in the Watson for Oncology trials said the criticism reflects the robust processes in place for developing the software, adding that no technology can replace a human doctor’s knowledge of patients and their conditions.

Getting the Names Right

Amazon Rekognition, an image recognition engine that is being heavily marketed to law enforcement agencies for tracking down criminals, has meanwhile been put through its paces by the American Civil Liberties Union. The organization downloaded 25,000 criminal mugshots and asked the Rekognition software to match them against photos of all 535 members of Congress.

The system falsely matched 28 of the politicians with criminals, with the ACLU also noting that the false matches skewed disproportionately towards black members of Congress.

The civil liberties body has campaigned against the use of Rekognition by police departments and has warned that facial recognition risks fueling racial discrimination and poses a risk to human rights.

In its test, the ACLU set Rekognition to an 80% confidence threshold for facial matching, a level that Amazon argues is appropriate only for mundane subjects, like furniture or pets. For identifying specific faces, it says a 95% confidence threshold should be used. Amazon adds that Rekognition should serve only as a guide, with a human always making the final judgment about whether a face matches.
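To illustrate what that threshold controls, here is a minimal sketch of a face comparison against the AWS Rekognition API using Python’s boto3 client. The file names and region are placeholders, not details from the ACLU test; the two threshold values are the ones discussed above.

```python
# Minimal sketch of a Rekognition face comparison with boto3 (the AWS SDK
# for Python). File names and region are illustrative placeholders.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

def compare_faces(source_path: str, target_path: str, threshold: float):
    """Return face matches at or above the given similarity threshold."""
    with open(source_path, "rb") as source, open(target_path, "rb") as target:
        response = client.compare_faces(
            SourceImage={"Bytes": source.read()},
            TargetImage={"Bytes": target.read()},
            # 80 was the ACLU's setting; Amazon recommends 95 for faces.
            SimilarityThreshold=threshold,
        )
    return response["FaceMatches"]

# A lower threshold admits more candidate matches, and with them more
# false positives, than the 95% level Amazon recommends for people.
matches_80 = compare_faces("member_photo.jpg", "mugshot.jpg", threshold=80)
matches_95 = compare_faces("member_photo.jpg", "mugshot.jpg", threshold=95)
```

The threshold is simply a cut-off on the similarity score the service returns, which is why the same pair of images can “match” at 80% confidence and not at 95%.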

Artificial Incompetence

Both the IBM Watson for Oncology and the Amazon Rekognition failures come down to AI bumping up against the real world, where trustworthy data is scarce and human fallibility is in plentiful supply. Tech companies, governments and the media have made grand predictions about the rosy future of AI.

But those promises must be kept in perspective: those tasked with using the technology need to keep their feet firmly on the ground. As AI is embedded into many of society’s most sensitive and important functions, it is becoming clear that this is a technology that needs a large warning sign hung around its neck.