OpenAI Sued as ChatGPT Falsely Accuses Man of Embezzlement

  • OpenAI’s ChatGPT has landed the company in trouble after it falsely accused a man of crimes, resulting in a libel lawsuit.
  • While the world rapidly adopts generative AI, several countries have partially or completely restricted access as they develop national strategies to minimize the harm from faulty AI-generated information.

Mark Walters, a radio host from Georgia, has sued OpenAI, the developer of ChatGPT, after the chatbot generated a fake legal summary accusing him of embezzlement and fraud. The mistake has been attributed to a phenomenon dubbed ‘hallucination.’ This is believed to be the first defamation suit filed against the developer of a generative AI tool.

The case states that ChatGPT provided information about a fake complaint to journalists researching a real, ongoing lawsuit. The suit alleges that the chatbot summarized a made-up case claiming that Walters had embezzled from and defrauded an organization. The suit alleges negligence on the part of OpenAI; to prevail, Walters will need to prove actual malice and that the incident caused him some form of harm.


OpenAI and Google Acknowledge Hallucination Problems

Generative AI tool makers, including OpenAI and Google, have acknowledged the problem of AI hallucinations. On May 31, 2023, OpenAI announced its intention to tackle hallucinations with a new method for training large language models (LLMs).

AI hallucinations occur when models such as Bard or ChatGPT fabricate information outright and present it as fact. In one instance, ChatGPT invented bogus case citations that ended up in a New York federal court filing.

OpenAI’s new strategy for filtering out falsehoods, called process supervision, rewards the model for each reasoning step it gets right rather than only for a correct final conclusion. This is expected to produce a more human-like approach to problem-solving.
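To make the distinction concrete, here is a minimal Python sketch of the two reward schemes. The function names and the toy arithmetic checker are hypothetical illustrations of the general idea; OpenAI has not released code for its actual reward models.

```python
# Hypothetical sketch contrasting outcome supervision with process
# supervision. All names and the scoring scheme are illustrative
# assumptions; OpenAI's actual implementation is not public.
from typing import Callable, List


def outcome_reward(steps: List[str], final_ok: Callable[[str], bool]) -> float:
    """Outcome supervision: reward depends only on the final answer."""
    return 1.0 if steps and final_ok(steps[-1]) else 0.0


def process_reward(steps: List[str], step_ok: Callable[[str], bool]) -> float:
    """Process supervision: score every reasoning step, so a wrong turn
    early in the chain is penalized even if the final answer looks right."""
    if not steps:
        return 0.0
    return sum(1.0 for s in steps if step_ok(s)) / len(steps)


def check_arithmetic(step: str) -> bool:
    """Toy per-step checker for lines of the form '<expr> = <value>'."""
    expr, _, result = step.partition("=")
    return abs(eval(expr) - float(result)) < 1e-9


# A flawed chain that stumbles onto an answer matching the expected value.
bad_chain = ["2 + 3 = 6", "6 * 4 = 19"]
final_ok = lambda s: s.split("=")[-1].strip() == "19"

print(outcome_reward(bad_chain, final_ok))          # 1.0 -- flawed chain rewarded
print(process_reward(bad_chain, check_arithmetic))  # 0.0 -- every step is wrong
```

The design point is that the process-based score exposes flawed intermediate reasoning that an outcome-only score would let slip through.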

Google CEO Sundar Pichai has also admitted that no one in the field has solved the problem of hallucinations yet, noting that engineers do not fully understand many aspects of AI technologies. Elon Musk, too, has been sounding the alarm about the flaws of AI tools for months.

The issue comes at a time when misinformation spread by artificial intelligence is a growing concern, especially with the 2024 U.S. presidential election around the corner.

What do you think is next for OpenAI and ChatGPT? Let us know on LinkedIn, Twitter, or Facebook. We would love to hear from you!

Image source: Shutterstock
