ChatGPT Used to Cite Bogus Cases in Lawsuit, Lawyer Faces Sanctions

In a surprising turn of events, lawyers suing Avianca, the Colombian airline, submitted a legal brief containing fabricated cases generated by OpenAI’s ChatGPT, as reported by The New York Times. The bogus cases were flagged by opposing counsel during the proceedings, and US District Judge Kevin Castel confirmed that six of them were entirely fictitious, complete with false quotes and internal citations. The judge has scheduled a hearing to consider potential sanctions against the plaintiff’s lawyers.

In an affidavit, attorney Steven A. Schwartz admitted to using OpenAI’s chatbot for his legal research. To verify the authenticity of the cases, Schwartz took an unusual step: he asked the chatbot directly whether it was providing false information. The chatbot apologized for any earlier confusion and assured him that the cases were real, suggesting they could be found on legal research platforms such as Westlaw and LexisNexis. Satisfied with this response, Schwartz concluded that the cases were legitimate.