Attorneys Fined $5,000 for Use of ChatGPT that Invented Precedents

A U.S. judge ordered two lawyers to pay a $5,000 fine for filing a court brief that relied on the popular artificial intelligence (AI) tool ChatGPT, which made up a series of non-existent legal precedents.

Source: CE Noticias Financieras | Published on July 5, 2023

Judge Kevin Castel found that attorney Steven Schwartz and his partner Peter LoDuca "consciously avoided" signs that the cases ChatGPT had included were false and made "misleading" statements to the court, and therefore concluded that they acted in bad faith.

The judge considered that while there is "nothing inherently improper about using a reliable artificial intelligence tool for assistance", the rules "impose a gatekeeping role on attorneys to ensure the accuracy of their filings".

The ruling also emphasized that both lawyers "abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes created by the artificial intelligence tool ChatGPT, then continued to stand by those fake opinions after court orders called their existence into question."

The two were working on a lawsuit against the airline Avianca filed by a passenger who claims he was injured when he was struck by a serving cart during a flight.

Schwartz, who represented the plaintiff, used ChatGPT to draft a brief opposing a defense motion to dismiss the case.

In the 10-page document, the lawyer cited several court decisions to support his argument, but it soon emerged that OpenAI's well-known chatbot had made them up.

"The Court is presented with an unprecedented circumstance. A submission filed by plaintiff's counsel in opposition to a motion to dismiss is replete with citations to non-existent cases," Judge Castel wrote at the time.

The attorney himself submitted an affidavit admitting that he had used ChatGPT to prepare the brief and that the only verification he had conducted was to ask the chatbot whether the cases he cited were real.

Schwartz defended himself by saying he had never used such a tool before and was therefore "unaware of the possibility that its content could be false."

The lawyer stressed that he had no intention of misleading the court and fully exonerated the other lawyer at the firm, who also faced possible sanctions.