Some background. OpenAI is an American artificial intelligence (AI) research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership. OpenAI conducts AI research with the declared intention of promoting and developing friendly AI.
The very first time. In a first of its kind, ChatGPT owner OpenAI is facing a defamation lawsuit from an Australian mayor, after the startup's ChatGPT 3.5 tool provided false information about a bribery scandal.
I am going to tell you something you do not want to hear. ChatGPT claimed that the mayor of Hepburn Shire Council, northwest of Melbourne, Australia, Mr. Brian Hood, had served time in prison over a bribery scandal that took place while he was working for a subsidiary of the National Australia Bank. In fact, he did not. Not only has Mr. Hood never been to prison, he was never even charged with a crime; on the contrary, he was the whistleblower who helped take down the actual guilty parties in that scandal.
He says; they say. Mr. Hood says he was very surprised to learn that ChatGPT had produced such a misleading answer and was “horrified” to see what the chatbot was telling people: “I was stunned at first that it was so incorrect,” he told the Australian broadcaster ABC News.
For its part, OpenAI says in its public blog about the tool that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers”. Pointedly, when people use ChatGPT, they are shown a disclaimer warning that the content it generates may contain “inaccurate information about people, places, or facts”. In this case, it was rather more than simply “inaccurate”. Indeed, one of the biggest challenges with generative AI systems is the phenomenon of hallucination: the model's tendency to make up information that seems true but is not. That is precisely what happened here; as one author put it, “ChatGPT is impressive at parsing and generating English sentences, but it has a problem with facts”.
Law has its (rather inconclusive) word. The case is a stark reminder of both the current shortcomings of AI and the devastating damage they can inflict on real people. From a legal point of view, the key issue is whether the owner of a chatbot can be sued for libel. According to Harvard Law's Laurence Tribe, such a claim would be plausible, since the law does not care whether libel is generated by a human or by a machine. However, the University of Utah's RonNell Andersen Jones notes that the law was written to apply to a defendant with “a state of mind”: the AI would have to have known the output was false, or to have written the response with reckless disregard for whether it was true, a difficult standard to apply to an inanimate tool. Finally, a group of Stanford professors explains that the defamed party would likely start with the AI's owner. The owner, in turn, would try to shift blame to the tool's manufacturer, “arguing that it was designed in a way that made it dangerous…The truth of the matter often is likely to be quite unclear.”
A matter of time -- and a classic quote. In my opinion, what is remarkable is that this case is only the latest in a growing list of AI chatbots publishing false allegations about people. The people so defamed may first recall Cicero's words: “When, O Catiline, do you mean to cease abusing our patience?”, and then go on to file a complaint…
“ChatGPT: Mayor starts legal bid over false bribery claim”.
“So you’ve been defamed by a chatbot”, by Ben Schreckinger. Politico, https://www.politico.com/newsletters/digital-future-daily/2023/04/06/so-youve-been-defamed-by-a-chatbot-00090874
“Mayor mulls defamation lawsuit after ChatGPT falsely claims he was jailed for bribery”, by Sophia Khatsenkova & Natalie Huet with Reuters. Euronews, https://www.euronews.com/next/2023/04/07/why-does-chatgpt-make-things-up-australian-mayor-prepares-first-defamation-lawsuit-over-it
Kavinsky - “Nightcall” https://www.youtube.com/watch?v=MV_3Dpw-BRY
“Cicero's First Speech Against Catiline”, Penguin Classics edition, translation and comments by Michael Grant
“ChatGPT Libeled Me. Can I Sue? ‘I am programmed to provide objective and factual responses,’ it claims, not under oath”, by Ted Rall. The Wall Street Journal, https://www.wsj.com/articles/chatgpt-libeled-me-can-i-sue-defamation-law-artificial-intelligence-cartoonist-court-lawyers-technology-14086034
“Could Popular AI Chatbot Talk Users into Legal Peril?”, by David Hoppe. https://gammalaw.com/could-popular-ai-chatbot-talk-users-into-legal-peril/