Since the advent of large language models, the media and experts have been sounding the alarm: the content these systems generate can be wrong. Not everyone has gotten the message.
Using ChatGPT, a New York attorney cited cases that don't exist, unaware that the system was making things up.
The case in question involves a man suing an airline for alleged personal injury because he was struck in the knee by a service cart during a flight.
The airline filed a motion to dismiss the case. The plaintiff's legal team turned to ChatGPT to argue their client's position.
The brief filed by the legal team cites examples from non-existent legal cases made up by ChatGPT, with lengthy and seemingly accurate citations such as "Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019)."
The document dates from early March, so the made-up parts were generated using GPT-3.5, which is even less reliable than GPT-4 in terms of factual accuracy.
Bogus judicial decisions with bogus quotes and bogus internal citations
The court requested copies of the cases, and the legal team again asked ChatGPT for details. ChatGPT readily obliged, inventing numerous details about the fictitious cases, which the legal team attached to their filing as screenshots, including of the ChatGPT interface on a smartphone (!).
The airline’s lawyers continued to dispute the authenticity of the cases, prompting the judge to ask the plaintiff’s legal team to comment again.
The Court is presented with an unprecedented circumstance. A submission filed by plaintiff's counsel in opposition to a motion to dismiss is replete with citations to non-existent cases. […] Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.
In a subsequent affidavit, Steven A. Schwartz, the attorney responsible for the research, admits to using ChatGPT for the first time for the research without being aware that the system could generate false content. ChatGPT had “assured” the accuracy of its sources when asked.
Lawyer regrets using ChatGPT
Schwartz has practiced law for more than 30 years. In a statement, he apologized for using the chatbot, saying he was unaware that the information it generated could be false. In the future, he said, he would not use AI as a legal research tool without verifying the information it produces.
Admittedly, OpenAI's chatbot made it easy for Schwartz to believe the lies: screenshots show conversations between Schwartz and ChatGPT in which the lawyer asks for sources for the cited cases and the system promises to "double-check" them in legal databases such as LexisNexis and Westlaw. Still, it was Schwartz's responsibility not to rely on this information without verification.

The lawyer and his colleagues will have to justify in court on June 8 why they should not be disciplined. The ChatGPT-assisted search for precedents may itself become a precedent for why ChatGPT should not be used for research, or at least not without verification, and a warning to the industry.