Confronting The Risks of Artificial Intelligence Technology


This article discusses some broad and pressing issues around AI technologies and their implications for society and businesses: What are these issues, and why are they so critical? What do they mean for businesses, and how can businesses manage these challenges? We also consider Explainable Artificial Intelligence as a step in the right direction.

As AI systems become ever more mainstream, headlines and debates about them appear regularly, reflecting the wide range of opinions on the topic. There is also no shortage of blockbuster movies painting a grim picture of the dangers of artificial intelligence.

The Cons of Artificial Intelligence

Research conducted at Georgia Tech showed that AI systems displayed uniformly reduced performance when detecting pedestrians with darker skin tones, raising fears that a world rife with autonomous cars will be less safe for darker-skinned pedestrians than for lighter-skinned ones.

In December 2019, Facebook took down 600 accounts, collectively holding over 55 million followers, that were using identities created by artificial intelligence to push pro-Trump stories on a variety of topics related to impeachment and the elections.

As McKinsey states, AI is proving to be a double-edged sword, with both edges far sharper and less understood than those of other technologies.

The Pros of Artificial Intelligence

AI is already hailed as the next big thing. With applications such as smart housing, next-gen automobiles, personal assistants, public surveillance, advanced healthcare, drones in logistics and fraud prevention in finance, the technology is already growing in use and acceptance.

80% of executives at companies that have actively deployed the technology report moderate to high returns from their AI investments.

Considering Different Facets of Artificial Intelligence

With artificial intelligence poised to become more prominent, it is time to consider ethics in AI: the pressing challenges, and how we can ensure a positive outcome for humanity by delivering AI models responsibly and with confidence.

Today, artificial intelligence systems based on neural networks and deep learning models use millions of parameters, which create incredibly complex and highly nonlinear internal representations of the images or datasets fed to them. This is why they are considered 'black box' systems. The pressing issue now is to deliver transparency into this 'black box' AI.

Learn More: How to Turn Failed AI Initiatives into Real Business Value

What Are the Broad Pressing Issues With AI in Society?

1. Increasing Inequality:

Many experts have expressed concerns that the rise of robots and intelligent systems might lead to massive job losses, or might concentrate capital in the hands of a few. The potential of automation to replace more expensive human labor in blue-collar jobs may create a need to redeploy or retrain employees for other roles. In the future, we may see more debates on universal basic income programs to ensure that no one in society is left out.

2. What if Outcomes are Not Aligned with the Law?

This frequently comes up in sci-fi movies, where a rogue system infringes upon the laws of society in pursuit of its stated objective. A famous example appears in the popular TV show Rick & Morty, where Rick tasks an intelligent system with keeping Summer safe in his absence. As events unfold, the system kills a robber and police officers, and finally takes the entire city hostage, because it deems them threats to Summer's safety. While you might argue that the scenario is far-fetched, it highlights a significant issue: it is extraordinarily difficult to codify ethical behavior fully.

3. Bias Leading to Incorrect Decisions:

AI systems are supposed to have a low error rate. After all, we don't expect them to be affected by fatigue, boredom, resentment, or human biases, right? Wrong. There have been several high-profile cases in which an AI system exhibited bias. A case in point: Amazon's sophisticated AI-driven recruitment system began to show a preference for male candidates over female ones. AI systems can pick up biases from their training data, their algorithms, or their human developers, and can develop biases against races, genders, religions, or ethnicities.
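One practical safeguard is to audit a model's outputs across demographic groups before deployment. The sketch below computes a demographic-parity gap, i.e., the difference in positive-prediction rates between two groups; all predictions and group labels are hypothetical:

```python
# Minimal bias-audit sketch: compare positive-prediction ("selection")
# rates across demographic groups. All data here is hypothetical.

def selection_rate(preds, groups, group):
    """Fraction of positive predictions among members of `group`."""
    member_preds = [p for p, g in zip(preds, groups) if g == group]
    return sum(member_preds) / len(member_preds)

# 1 = model recommends the candidate, 0 = it does not
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rate_a = selection_rate(preds, groups, "A")   # 0.75
rate_b = selection_rate(preds, groups, "B")   # 0.25
parity_gap = rate_a - rate_b                  # 0.5 -- worth investigating

print(f"selection rate A: {rate_a}, B: {rate_b}, gap: {parity_gap}")
```

A gap near zero does not prove the model is fair (demographic parity is only one of several competing fairness criteria), but a large gap is a clear signal to investigate the training data and features.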

4. The Legal Standing of AI Systems:

With AI systems, there is also the question of legal standing: what rights are they entitled to? For example, Sophia was granted citizenship by Saudi Arabia. While that was primarily a PR stunt, it raises an inevitable, pressing question: who is responsible for an AI system when there is a breach or infraction? If machines and AI systems are indeed "autonomous", can they be held accountable for wrongdoing? Should a robot be charged if it runs a red light to arrive at an emergency on time? How would we prosecute it? If a system can upgrade and improve itself without its maker's or owner's intervention, why should it not be recognized and held responsible for what it does? And how do we deter such a system? These are questions we have to answer to ensure a future with responsible AI.

Learn More: AI: Let's Get Real

Why Is the Issue so Critical?

As we step into a future where AI is expected to play a role in every walk of life, it is imperative to examine the data we already have and extrapolate what that future will look like. Ethics in AI will provide a safeguard for meaningful innovation.

Bill Gates famously compared AI technology to a nuclear bomb, stressing that without the appropriate knowledge, AI could become overwhelming and dangerous for society. Before debating whether an AI system can be relied upon to make "ethical" decisions, we may have to reconsider our definition of "ethical" behavior first.

Do we have a sure way to define ethics first? How do we build an ethical outcome into logic? Are we willing to put the power of autonomy into systems without understanding these issues first?

How Are Businesses Grappling With These Challenges?

Companies are deploying sophisticated artificial intelligence systems that can mimic human cognitive functions and perform extensive analysis and automation.

As AI systems become smarter, they are turning into a modern-day Pandora's box, and legitimate concerns need to be addressed before they create unwanted scenarios.

For businesses, the most visible AI risks are privacy violations, discrimination, and accidents. If not handled well, these issues can damage organizations in ways ranging from reputational and revenue losses to regulatory backlash, criminal investigation, and diminished public trust.

Organizations are struggling to answer this question: How can they ensure that their algorithms act responsibly and ethically?

Companies need to become aware of these perils and, before deploying their algorithms:

  • Define guidelines and set up governance over the operation of their AI systems
  • Plan how to operationalize those guidelines
  • Discuss the guidelines with their teams and educate them

Learn More: How Do AI and Blockchain Complement Each Other?

Enter Explainable AI

Artificial intelligence models can often be a "black box"; hence, the need for Explainable AI, which provides insight into the data, decision points, and techniques a model used, is growing in importance. Explainable AI treats model interpretability as critical to our ability to optimize AI and solve problems in the best possible way.

Google has taken a significant step in this direction by announcing Explainable AI, with features like the What-If Tool and attribution modelling to help businesses deploy AI with confidence and streamline model governance.

Explainable AI, or XAI for short, will need to answer some vital questions, such as:

  1. Why did the AI system give a specific prediction or take a course of action?
  2. Why didn’t it choose another course of action?
  3. When did the AI system succeed or fail?
  4. When can we trust decisions made by AI systems?
  5. How can the AI system correct its errors?
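For the first two questions, a common XAI technique is feature attribution: decomposing a prediction into per-feature contributions relative to a baseline input. For a linear model this decomposition is exact, as the minimal sketch below shows; the weights, bias, and inputs are invented purely for illustration:

```python
# Feature attribution for a linear model: each feature's contribution
# relative to a baseline input is weight * (x_i - baseline_i). The
# weights, bias, and inputs below are hypothetical.

def predict(x, weights, bias):
    """Linear model: weighted sum of features plus a bias term."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def attribute(x, baseline, weights):
    """Per-feature contribution to predict(x) - predict(baseline)."""
    return [w * (xi - bi) for w, xi, bi in zip(weights, x, baseline)]

weights  = [0.8, -0.5, 0.1]   # hypothetical learned weights
bias     = 0.2
x        = [1.0, 2.0, 3.0]    # instance to explain
baseline = [0.0, 0.0, 0.0]    # reference input

contribs = attribute(x, baseline, weights)
delta = predict(x, weights, bias) - predict(baseline, weights, bias)

# Completeness check: the contributions sum to the change in prediction
assert abs(sum(contribs) - delta) < 1e-9
print(contribs)
```

For deep, nonlinear models the same idea is approximated by methods such as integrated gradients or SHAP, which is what attribution tooling in XAI products generally builds on.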

Explainable AI, along with streamlined governance, well-laid policies, and education around ethics in AI, will help us as a society grapple with the growing concerns and risks associated with AI.

Let us know your thoughts about the risks of AI on LinkedIn, Twitter, or Facebook. We would love to hear from you!