A.I. Company Sued for Defamation


Attorney Katie Charleston talks with Communication Intelligence about an important legal case about communications, artificial intelligence and responsibility.

Artificial intelligence isn’t perfect technology, and now a company is facing a lawsuit because of it, which might be a sign of things to come.

“Syndicated host Mark Walters, the founder of Armed American Radio, has filed a defamation lawsuit against OpenAI, alleging that the company’s AI-powered chatbot, ChatGPT, fabricated false legal claims against him,” reports Radio Ink.

Here’s the backstory before a Communication Intelligence magazine conversation with an attorney:

“According to the lawsuit filed in Georgia’s Superior Court of Gwinnett County, journalist Fred Riehl asked ChatGPT to summarize a case called Second Amendment Foundation v. Ferguson, which involved accusations against Washington’s Attorney General Bob Ferguson.

“Riehl provided a link to the case but did not know that ChatGPT cannot access URLs, so he received a completely made-up story,” Radio Ink says.

“In its fabricated response, ChatGPT falsely claimed that the case was a legal complaint filed by Alan Gottlieb, the founder of the Second Amendment Foundation, against Mark Walters.

“The AI chatbot even provided an erroneous case number, saying that Walters ‘misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports.’

“Walters asserts that every statement regarding him in ChatGPT’s response is libelous and malicious, and that the embezzlement accusations are damaging his reputation and exposing him to public contempt,” Radio Ink reported.

The fact is, “At no point in the actual Second Amendment Foundation v. Ferguson filing is Mark Walters mentioned,” according to Radio Ink.

Communication Intelligence talks about this important communications story with Katie Charleston, founder and partner at Katie Charleston Law.

“At first glance the plaintiff has a valid case,” Charleston says. “Libel per se involves a false statement concerning the plaintiff, an unprivileged communication to a third party in a writing and fault by defendant amounting to at least negligence.”

The good news for Walters, she adds, is that “Damages are not required (to be shown) in this case because the defamation consists of a charge that one is guilty of a crime, dishonesty or immorality, which makes it libel per se and eliminates that requirement; though it appears there are real damages here.”

The legal debate, Charleston asserts, comes down to asking, “Who is actually liable for the negligence that occurred that led to the libel? Is it the AI platform, which provides a disclaimer for misinformation, or the user who relied upon and published the information? Or both?”

The problem for the defendant — OpenAI — in this case is that while, “Truth and privilege are the typical defenses to defamation,” Charleston says, “neither exists here.”

Perhaps surprisingly, a disclaimer is not always a bulletproof defense.

“That OpenAI uses a disclaimer in ChatGPT for misinformation evidences the company’s knowledge of the system spewing false information, but to my knowledge a disclaimer has never been a valid defense to defamation,” Charleston says.

AI companies possibly didn’t foresee this type of problem and plan responsibly for it.

“As we can see from this case,” she says, “there is a real risk of liability for AI companies in releasing their products to the public without proper controls in place.

“AI hallucinations, when a large language model (LLM) generates false information, are confident responses by AI systems that seem real due to the great detail provided. They are acknowledged by AI companies, and the companies are currently relying on a pop-up disclaimer to prevent the public’s reliance.”

It’s important to realize that users, in their excitement or curiosity about AI, are not being deterred by warnings.

“We have seen over the last several months that this limited disclaimer is not preventing the public’s reliance on information the AI systems provide,” Charleston says.

Companies will learn fast. To protect themselves legally, the language in their waivers of responsibility, so to speak, will become more precise.

“Having the consumer sign an indemnity agreement for their use of the information received and requiring the consumer to accept responsibility for publicly using the information is how the AI companies will likely impose limits of liability in the future,” Charleston says.

There are well-known concerns and fears about the dark side of AI.

“As seen in recent congressional hearings, AI companies are calling for government regulations of the release and maintenance of artificial intelligence systems because of the fear that the AI Systems will be used to generate malicious and incorrect content,” Charleston says.

“Knowing that the systems are generating such content without given instruction to do so increases the need for such regulation to prevent public harm and puts additional pressure on the government for quick implementation of such regulations,” she adds.

Each side will have burdens to meet to prevail.

“The plaintiff will have to prove the elements of the allegations, which should be easy to satisfy based on the facts of this case,” she says.

“The defendant will have to either establish a defense, which from the facts seems nearly impossible, or assert that they are an improper party and attempt to bring in the actual user of the information, Fred Riehl (see mention above).”

Charleston is willing to forecast the outcome.

“I predict there will be a payout to Walters, either by way of settlement or prevailing at trial,” she says.

There is a readily available lesson here for AI companies.

“OpenAI would be smart to use this as a learning opportunity on how to handle such false information both within their system and with the public.”

 
Michael Toebe

Founder, writer, editor and publisher
