Lawyer who used ChatGPT for a court filing now faces a hearing himself



Summary

Since the advent of large language models, the media and experts have been sounding the alarm: the content these systems generate may be wrong. That message has not been heard everywhere.

A New York attorney used ChatGPT for legal research and cited cases that don’t exist, not knowing the system was making them up.

The case in question involves a man suing an airline for alleged personal injury after he was struck in the knee by a service cart during a flight.

The airline filed a motion to dismiss the case. The plaintiff’s legal team turned to ChatGPT to argue their client’s position.


The brief filed by the legal team cites non-existent legal cases made up by ChatGPT, complete with lengthy and seemingly accurate citations such as “Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019).”

The document dates from early March, so the fabricated passages were generated with GPT-3.5, which is even less factually reliable than GPT-4.

Bogus judicial decisions with bogus quotes and bogus internal citations

The court requested copies of the cases, and the legal team again asked ChatGPT for details. The chatbot readily obliged, inventing numerous details about the fictitious cases, which the legal team attached to their filing in the form of screenshots – including the ChatGPT smartphone interface (!).

The airline’s lawyers continued to dispute the authenticity of the cases, prompting the judge to ask the plaintiff’s legal team to comment again.

The Court is presented with an unprecedented circumstance. A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases. […] Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.

Order to Show Cause

In a subsequent affidavit, Steven A. Schwartz, the attorney responsible for the research, admits that this was his first time using ChatGPT for research and that he was unaware the system could generate false content. When asked, ChatGPT had “assured” him that its sources were accurate.

