Ethical AI May Not See Widespread Adoption by 2030: Pew Research


Ever since the advent of artificial intelligence (AI), there have been calls to lay down codes of conduct for designing AI systems so that they respond to situations ethically. However, according to a recent study by Pew Research Center and Elon University’s Imagining the Internet Center, experts doubt that the industry will broadly adopt ethical AI design within the next ten years.

The study surveyed 602 researchers, tech innovators, policy and business leaders, and activists. The primary question posed was, “By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?” About 68% of the respondents held that ethical principles focused primarily on the public good will not be implemented in most systems by 2030, while the remaining 32% felt that they will be.

Further, a significant number of respondents were concerned that AI’s evolution by 2030 will focus mainly on social control and optimizing profits. They also believed that stakeholders will find it difficult to reach a consensus on what AI ethics should look like. At the same time, they hoped that progress will be made as the technology spreads and demonstrates its value, noting that societies have always found ways to mitigate the problems created by technological evolution.

Concerns About AI

The respondents also had a few troubling concerns about AI:


1. It Is Difficult To Define Ethical AI

Some respondents said it is difficult to define, implement, and enforce ethical AI, and that context matters greatly in ethical considerations. Attempts to fashion ethical rules could lead to countless scenarios in which applying those rules is messy. Further, both good and bad actors could exploit loopholes and gray areas where rules are not well defined, leaving patches, workarounds, and remedies to be created with varying degrees of success.

2. Governance Is a Concern

Some respondents raised governance-related concerns, asking questions such as: “Whose ethical systems should be applied?” “Who gets to make that decision?” “Who has the responsibility to care about implementing ethical AI?” “Who might enforce ethical regimes once they are established? How?”

3. Economic and Geopolitical Competition Are Main Drivers

A significant number of respondents said that economic and geopolitical competition are the key drivers for AI developers today. Consequently, moral concerns take a back seat. Some experts believe that AI tool creators work in groups with almost no incentive to design systems addressing ethical concerns.

4. AI Design Is Proprietary, Hidden, and Complex

Some say that even if workable ethics requirements are established, they cannot be applied or governed because most of the technology’s design is proprietary, complex, and hidden. Some experts also note that existing AI databases and systems are often used to build new applications, which means the ethically troubling aspects and biases of current systems will be carried into the new ones.


Hopes About Ethical AI Development

While many of the respondents expressed worries, others expressed hope about AI development. Both groups agreed that AI will continue to benefit humanity: more applications will be developed to make people’s lives safer and better, and medical and scientific breakthroughs could help people live healthier lives. Some believe the technology will expand in positive ways to assist humans.

Respondents who are hopeful about the implementation of ethical AI made the following points:

  • Historically, ethics has evolved as new technologies matured and became embedded in cultures, with adjustments arising as problems arose.
  • Fixes may roll out in different ways, at different times, and in different domains.
  • Expert panels concerned with ethical AI are being convened worldwide.
  • Social upheavals arising from the technology’s problems may push ethical AI closer to the top of human agendas.
  • AI itself may be used to evaluate the impact of technology and eliminate unethical applications.
  • A new generation of technologists trained in ethical thinking will pave the way for positive progress in society, and the public will become wiser about the disadvantages of being code-dependent.

Expert Thoughts and Ideas

Besides the hopes and concerns, a few respondents introduced novel ideas, wrote about leading themes, and shared thoughts not widely discussed by others.

Jonathan Taplin, author of “Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy,” noted, “Even if most major players in AI will abide by ethical rules, the role of bad actors can have outsized effects on society. The ability to use deepfakes to influence political outcomes will be tested.”

He added, “What worries me the most is that the substitution of AI (and robotics) for human work will accelerate post-COVID-19. The political class, with the notable exception of Andrew Yang, is in total denial about this. And the substitution will affect radiologists just as much as meat cutters. The job losses will cut across classes.”

Bill Dutton, professor, media and information policy, Michigan State University, noted, “AI is not new and has generally been supportive of the public good, such as in supporting online search engines. The fact that many people are discovering AI as some new development during a dystopian period of digital discourse has fostered a narrative about evil corporations challenged by ethical principles. This technologically deterministic good versus evil narrative needs to be challenged by critical research.”

A research scientist working on AI innovation with Google said, “There will be a mix. It won’t be wholly questionable or ethical. Mostly, I worry about people pushing ahead on AI advancements without thinking about testing, evaluation, verification and validation of those systems. They will deploy them without requiring the types of assurance we require in other software. For global competition, I worry that U.S. tech companies and workers do not appreciate the national security implications.”


Could Quantum Computing Aid Ethical AI?

While quantum computing (QC) is still in its early stages of development, could it one day support the development of ethical AI systems? Most responses to this question were speculative, as QC is still nascent; even its developers are somewhat unsure about its possible future applications. Having said that, some respondents do believe that QC could give AI ethics systems a boost.

For example, Vint Cerf, Internet Hall of Fame member and vice president and chief internet evangelist, Google, commented, “There is some evidence that quantum methods may be applicable to ML systems for optimization for example. But it’s early days yet.”

Eric Knorr, a pioneering technology journalist and editor in chief, IDG, observed, “Yes, computing power a magnitude greater than currently available could raise the possibility of some emulation of general intelligence at some point. But how we apply that is up to us.”

Genevieve Bell, director, 3A Institute, Australian National University, responded by posing six questions about the future of AI.

Our Take

AI has undoubtedly benefited humans in many ways. It has simplified, and continues to simplify, our lives by automating repetitive tasks where humans can sometimes lag. However, the concerns about the negative side of this technology are genuine. The dystopian future of the Terminator and Skynet may sound a little far-fetched, but people are concerned. In 2018, Elon Musk warned that AI could become “an immortal dictator from which we would never escape.”

Closer to reality, there are other concerns, such as AI bias. For example, Twitter’s photo-cropping algorithm exhibited bias last year: the system favored women over men by 8% and images of White people over Black individuals by 4%. The social media giant even had to concede that cropping a photo on its platform was best left to people rather than algorithms.
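To make figures like the 8% and 4% above concrete, here is a minimal Python sketch of how such a demographic parity gap can be measured: run paired comparisons, record which group the algorithm favors in each trial, and compare the favor rates. This is illustrative only, not Twitter’s actual methodology, and the trial data below is hypothetical.

```python
# Illustrative sketch only (not Twitter's methodology): measure a
# demographic parity gap from paired comparisons, where 1 means the
# algorithm favored a member of the group and 0 means it did not.

def favor_rate(outcomes):
    """Fraction of paired trials in which the group was favored."""
    return sum(outcomes) / len(outcomes)

# Hypothetical trial outcomes; a real audit would use thousands of pairs.
group_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]  # e.g., images of women
group_b = [1, 0, 0, 1, 0, 1, 0, 1, 0, 0]  # e.g., images of men

gap = favor_rate(group_a) - favor_rate(group_b)
print(f"Demographic parity gap: {gap:+.0%}")  # positive => group A favored
```

A gap near zero would suggest the algorithm favors neither group; bias audits like Twitter’s report exactly this kind of rate difference.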

Similarly, AI can exhibit bias when hiring a candidate or promoting a staff member to a higher position. For example, in 2019, recruiting platform HireVue drew criticism from researchers and professors for using AI to predict how likely a candidate was to succeed at work. The non-profit Electronic Privacy Information Center (EPIC) filed a complaint against the company with the U.S. Federal Trade Commission, arguing that HireVue’s AI-powered assessments were biased, unprovable, and not replicable. Earlier this year, HireVue dropped facial monitoring from its screening process.

Despite such concerns and criticism, many organizations do not seem to have done much in this regard. A Capgemini report showed that only 53% of organizations have a leader responsible for the ethics of their AI systems. Hence, we believe it is essential for organizations to establish such leadership to address these concerns and create ethically sound AI systems, and to hold those leaders accountable for the ethical outcomes of the systems and applications they oversee.


Developments in AI technology are happening at breakneck speed, and there is no indication of the pace slowing down. Having said that, there are valid concerns surrounding the technology, given its massive potential for both good and bad, and this is what has driven the discussion of ethical AI over the past few years. Yet implementing AI ethics is easier said than done. Organizations that make the necessary investment, however, will mitigate their risks. Most importantly, they will become what their customers and stakeholders look for: trustworthy.

Do you think ethical AI can be implemented within the next decade? Do let us know on LinkedIn, Twitter, or Facebook.