AI Regulation: A Step Forward or Ethics Washing?


While calls for new, AI-specific regulation have a valid place in the discussion of AI’s future, they should not come at the expense of other mechanisms that protect the public, says Ravit Dotan, responsible AI advocate at Bria. General laws, such as non-discrimination laws, apply to AI and should be enforced on a greater scale as soon as possible. Moreover, calls for regulation should not be used as a way to pass the buck to policymakers.

Organizations that develop and deploy AI can and should be proactive in ensuring that their technology is safe and beneficial, even if regulation and enforcement have not yet caught up to the technology.

Sam Altman, CEO of OpenAI, recently testified at a US Senate Judiciary hearing on AI oversight, where he advocated for AI regulation and garnered significant media attention. While calls for regulation are an important part of the discourse on AI, they should not come at the expense of other mechanisms that safeguard the public.

It’s crucial to enforce general laws, such as non-discrimination laws, on a larger scale to address AI-related harms as soon as possible. Nor should calls for regulation become a way to pass the buck to policymakers: organizations that develop and deploy AI should take proactive measures to ensure their technology is safe and beneficial, even as regulation and enforcement catch up.

Calls for Regulation as a Form of Ethics Washing

Calling for regulation without holding oneself accountable is an empty gesture that creates a false appearance of responsibility. At the hearing, Altman repeatedly called for regulators to intervene when senators raised concerns. He acknowledged that “it’s important that companies have their own responsibility here no matter what Congress does.” However, his silence on what OpenAI does to hold itself accountable suggests otherwise. The company’s own recent AI safety statement reflects the same kind of responsibility shifting.

While OpenAI’s statement addresses topics such as learning from real-world examples, privacy, protecting children, and accuracy, it overlooks significant issues. For example, it makes no mention of copyright and intellectual property (IP). Bias and fairness issues, which have been observed in experiments with ChatGPT, are likewise absent. The statement also disregards the risks of plagiarism and fraud, even though ChatGPT’s capabilities make fraudulent activities more accessible, shaking whole sectors to the core. And despite estimates pointing to high carbon emissions and heavy water consumption, the statement is similarly silent on ChatGPT’s environmental costs. The list goes on.

At the end of the statement, OpenAI implies that its safety efforts may be insufficient. Yet rather than holding itself to a higher standard, it advocates for regulation to prevent others from taking shortcuts:

“While we waited over 6 months to deploy GPT-4 in order to better understand its capabilities, benefits, and risks, it may sometimes be necessary to take longer than that to improve AI systems’ safety. Therefore, policymakers and AI providers will need to ensure that AI development and deployment is governed effectively at a global scale, so no one cuts corners to get ahead.”

Companies of OpenAI’s caliber, with their power, wealth, and influence, have a responsibility to prioritize public safety in their technology. While risk mitigation is a collective effort, it is unacceptable to pass the buck to regulators when so much is at stake. If ensuring your product is safe takes more than six months, then take more than six months before releasing a more advanced version.

Diverting Attention From Existing Laws

In addition to shifting the responsibility to regulators, calls for AI regulation can also divert attention from the existing laws that companies may be breaking. Laws on topics such as privacy, copyright, and IP get some attention, but they are only the tip of the iceberg. 

Non-discrimination laws, for example, are technology-agnostic: they forbid discrimination regardless of the underlying technology. US federal agencies have released statements emphasizing this fact, including a recent joint statement by four federal agencies (the FTC, DOJ, CFPB, and EEOC) and earlier statements by the FTC and the CFPB. Litigation efforts make the same point. The Department of Justice, for example, sued Meta (then Facebook) for violating the Fair Housing Act, arguing that discrimination brought about by Meta’s ad algorithm is illegal; Meta agreed to change its algorithm as part of the settlement. Given the biased assumptions ChatGPT makes, commercial use of it may violate non-discrimination laws in similar ways.

Existing laws were only briefly mentioned at the hearing, but they deserve much more attention. Unsurprisingly, tech companies are not calling for regulators to hold them accountable under current laws. Nevertheless, it’s critical to prioritize the enforcement of existing laws rather than rely solely on new regulation; existing laws can safeguard the public and hold tech companies accountable more promptly.

Responsibility Beyond Regulation

Calls for new, AI-specific laws may create the false impression that AI falls outside the scope of current laws. They can also perpetuate the narrative that tech companies are responsible only for what is legally mandated.

We should not buy into this narrative. Tech companies bear responsibility for the adverse effects of their technology, legal or not. As a society, we have the power to pressure these companies to fulfill their responsibilities.

Do you think AI regulation needs to be tighter, and should companies be made to slow down and focus more on responsible AI? Share with us on Facebook, Twitter, and LinkedIn. We’d love to hear from you!
