Google’s AI Dilemma Moves Ethics to Center Stage

Artificial intelligence and robotics have long been capable of terrifying the public. The idea of machines acting autonomously and making life-and-death decisions, whether on the roads with driverless cars, in medical treatment or through military action, is the stuff of nightmares.

That a machine should make such vital calls based on limited criteria, and without human judgement, is problematic. The absence of reasoning, empathy and value-based discrimination in machines makes their decisions seem inhuman, unfair and potentially unjust.

For AI to gain social and public acceptance – and become a driving force of the economy – a clear code of ethics is required regarding the activities of robotics and machine learning systems. Google has recently addressed this with a statement of principles about its use of AI.

Staff Rebellion

A blog post from CEO Sundar Pichai setting out Google’s AI ethics came in response to a rebellion by thousands of the technology giant’s staff against its AI work with the US military on visual recognition technology for drone warfare. Pichai says that Google will no longer work on weapons or technologies “whose principal purpose or implementation is to cause or directly facilitate injury to people.”

Google has announced internally that it will cease working with the military on Project Maven, which uses AI to identify targets for drone strikes. This followed the resignations of a dozen employees in protest at the project and a petition signed by nearly 4,000 staff demanding that Google withdraw from the project and promise to stop working on all warfare technology.

Pichai’s blog post goes some way to meeting those demands, although he says Google will continue to work with the military on cyber-security, training, recruitment, healthcare and search-and-rescue.

But the fact that staff have forced the company, on a matter of principle, to drop a contract believed to be worth $250 million a year says much about the ethical dilemmas involved in AI.

Learning from Mistakes

Opposition to AI-driven autonomous weaponry springs from the fear that weapons will eventually use machine learning to identify and destroy targets without human intervention. Machine learning is inherently fallible: it improves by learning from its own errors, and it makes decisions based on statistical probabilities and predictions rather than certainty. The potential for targeting errors, mistaken identity and friendly-fire incidents is significant.

Supporters of Project Maven argue that effective AI will over time improve identification of enemy targets, ultimately reducing civilian casualties. But the ethical quandaries over AI extend far beyond autonomous weaponry.

As techniques such as machine learning, natural language processing, visual recognition and predictive analytics spread more widely throughout society, fears that they will deliver unfair and incorrect results are growing.

In recruitment, an algorithm may learn that the most successful candidates historically have been white males from certain universities and therefore will train itself to seek out similar candidates, creating a cycle of discrimination. However, the counter-argument is that machines can be programmed to root out discrimination and that humans are more susceptible to hiring people who resemble themselves.
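To make that feedback loop concrete, here is a deliberately toy sketch in Python. Everything in it is hypothetical: the records, the attribute names and the naive scoring rule are invented for illustration and describe no real recruitment system, but they show how a model fitted to historically skewed outcomes ends up scoring otherwise identical candidates differently.

  # A toy, purely hypothetical sketch of how a hiring model can inherit historical bias.
  # The records, attribute names and scoring rule are invented for illustration only;
  # they do not describe any real recruitment system or product.

  # Invented historical records: (candidate_profile, was_hired)
  history = [
      ({"gender": "male",   "university": "A"}, True),
      ({"gender": "male",   "university": "A"}, True),
      ({"gender": "male",   "university": "B"}, True),
      ({"gender": "female", "university": "A"}, False),
      ({"gender": "female", "university": "C"}, False),
      ({"gender": "male",   "university": "C"}, False),
  ]

  def hire_rate(records, key, value):
      """Fraction of past candidates sharing this attribute value who were hired."""
      outcomes = [hired for profile, hired in records if profile[key] == value]
      return sum(outcomes) / len(outcomes) if outcomes else 0.0

  def score(candidate, records):
      """Naive 'model': average the historical hire rates of the candidate's attributes."""
      rates = [hire_rate(records, key, value) for key, value in candidate.items()]
      return sum(rates) / len(rates)

  # Two candidates with identical credentials are scored differently purely because
  # the historical data favoured one group: the bias is learned, then reinforced.
  print(score({"gender": "male",   "university": "C"}, history))    # 0.375
  print(score({"gender": "female", "university": "C"}, history))    # 0.0

Removing the protected attribute from the score, or auditing outcomes by group before deployment, is the kind of corrective the counter-argument has in mind, although in practice proxy variables for protected attributes make this harder than the toy example suggests.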

Artificial General Intelligence

So fears exist over the safety of autonomous vehicles, the spread of false information, risks to data privacy and the possibilities of mistakes and discrimination. And all that comes before considering the distant but inevitable ethical concerns over artificial general intelligence – machines capable of performing any intellectual task undertaken by humans.

These dilemmas add to the argument that the huge potential of AI requires careful consideration of its ethical and social dimensions. Pichai has attempted to address these concerns. In addition to ruling out the use of AI in weapons, he identifies seven “objectives for AI applications”:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

“These are not theoretical concepts,” Pichai says. “They are concrete standards that will actively govern our research and product development and will impact our business decisions.”

Google’s about-face comes as institutions around the world race to develop codes for the ethical use of AI and personal data. Singapore has appointed an advisory council from the public and private sectors to develop a voluntary code of ethics. Germany’s government last year announced ethical guidelines for driverless cars – for example, if an accident is unavoidable the software should choose the action that will cause the least harm to people.

Japan’s government published a Robot Strategy in 2015, while Britain has announced plans for a £9 million data ethics and innovation center to promote the safe, ethical and innovative use of AI.

Channeling Asimov

Any code of practice for AI will probably refer to the Three Laws of Robotics devised by science fiction writer Isaac Asimov, first set out in a 1942 short story and popularized in his 1950 collection I, Robot:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where they conflict with the first law.
  3. A robot must protect its own existence unless this conflicts with laws one and two.

Every company and institution involved in the development of AI and robotics needs clear and specific ethical principles. Only by winning the acceptance of stakeholders will AI earn a license to operate – as Sundar Pichai has discovered.