AI Pioneer Dr. Geoffrey Hinton Warns Against AI Haste, Quits Google

  • Dr. Geoffrey Hinton warned against the risk of hastily developed and marketed products and said he regrets his life’s work.
  • The AI pioneer quit Google this week after spending ten years with the company.

A premier expert on artificial intelligence warned against the risk of hastily developed and marketed products and said he regrets his life's work. Dr. Geoffrey Hinton expressed this on his way out of Google, where he had worked for over ten years.

According to a New York Times report, Dr. Hinton resigned from his research role at the online search and advertising giant to “speak freely” on his concerns about the potential of AI being misused.

The Turing Award winner joined Google following its $44 million acquisition of DNNresearch, a company he founded. At DNNresearch, Dr. Hinton and two of his students made a leap forward in neural networks by developing one that could analyze images and identify the objects in them, such as pens and animals.

Widely referred to as the "Godfather of AI," Dr. Hinton has worked on AI since his days at the University of Edinburgh in the early 1970s. Alongside other researchers, he helped develop techniques such as backpropagation and create AlexNet, which now form the basis of generative AI technologies.

Here’s what Dr. Hinton told the NYT:

“The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

“It is hard to see how you can prevent the bad actors from using it for bad things.”

However, he added:

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”

Dr. Hinton's apprehension about AI grew in 2022 as the Microsoft-backed OpenAI rolled out ChatGPT, its interactive generative AI chatbot. That year also saw the popularization of other generative AI tools, such as the digital image generator DALL·E. Dr. Hinton's consternation stems from how easily accessible these AI tools are to the public.

Dr. Hinton was not among the 1,000+ signatories (including Yoshua Bengio, founder and scientific director of Mila, who shared the Turing Award with Dr. Hinton) of the open letter published by the nonprofit Future of Life Institute in March 2023, nor of a second letter signed by 19 academics of the Association for the Advancement of Artificial Intelligence and Microsoft chief scientific officer Eric Horvitz.


The AI pioneer took to Twitter to clarify that he does not intend to criticize Google, publicly or otherwise, and that the company "has acted very responsibly." He only wishes to highlight the dangers of a hurried race to market and of imprudently built AI products and services.

In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.

— Geoffrey Hinton (@geoffreyhinton) May 1, 2023

For instance, Dr. Hinton explained to CNBC how GPT-4, the successor to ChatGPT’s underlying large language model (LLM), GPT-3.5, can outsmart humans. “If I have 1,000 digital agents who are all exact clones with identical weights, whenever one agent learns how to do something, all of them immediately know it because they share weights,” Dr. Hinton told CNBC.

“Biological agents cannot do this. So collections of identical digital agents can acquire hugely more knowledge than any individual biological agent. That is why GPT-4 knows hugely more than any one person.”
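The clone-with-shared-weights idea Dr. Hinton describes can be sketched in a few lines of code. The class and numbers below are purely illustrative, not anything from his remarks: the point is that many agents holding references to one weight store all "know" an update the moment any one of them makes it.

```python
# Illustrative sketch: 1,000 "digital agents" that are exact clones,
# all referencing the SAME weight list rather than private copies.

class SharedWeightAgent:
    def __init__(self, weights):
        self.weights = weights  # a reference to shared storage, not a copy

    def learn(self, index, delta):
        # When this agent "learns" something, it updates the shared weights...
        self.weights[index] += delta

shared = [0.0, 0.0, 0.0]                              # one set of weights
agents = [SharedWeightAgent(shared) for _ in range(1000)]

agents[0].learn(1, 0.5)                               # agent 0 learns something

# ...and every other clone immediately has that knowledge too.
print(agents[999].weights[1])                         # prints 0.5
```

A biological analogue would require each agent to hold its own copy of `weights` and relearn the update independently, which is the asymmetry Dr. Hinton is pointing to.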

Much of the anxiety lies in integrating such LLMs with critical infrastructure. When asked what the "chances are of AI just wiping out humanity," Dr. Hinton told CBS News, "It's not inconceivable. That's all I'll say."

Dr. Hinton's response can be misconstrued as a science-fiction-style AI doomsday scenario. However, experts need to address the real risks: misinformation right now, and the possibility of AI-automated systems causing job displacement further down the line.

As such, Dr. Hinton said, a functionally viable governance model and regulatory framework for AI is the need of the hour.

Do you agree with Dr. Hinton's opinions? Share your thoughts with us on LinkedIn, Twitter, or Facebook. We'd love to hear from you!

Image source: Shutterstock
