Lawyers suing Avianca, the Colombian airline, submitted a legal brief built on fabricated cases generated by OpenAI’s ChatGPT, as reported by The New York Times. Opposing counsel flagged the nonexistent cases during proceedings, and US District Judge Kevin Castel confirmed that six of them were entirely fictitious, complete with false quotes and internal citations. The judge has scheduled a hearing to consider potential sanctions against the plaintiff’s lawyers.
Attorney Steven A. Schwartz admitted in an affidavit that he had used OpenAI’s chatbot for his legal research. His method for verifying the cases was unusual: he asked the chatbot itself whether it was providing false information. The chatbot apologized for any earlier confusion, assured Schwartz the cases were real, and suggested they could be found on legal research platforms such as Westlaw and LexisNexis. Satisfied with this response, Schwartz concluded that the cases were legitimate.
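To see why that check proved meaningless, consider a minimal sketch using OpenAI’s Python SDK (this assumes the `openai` package and an `OPENAI_API_KEY` environment variable; the prompts and model name are illustrative, not a reconstruction of Schwartz’s session):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the model for case law, then ask it to "verify" its own answer.
history = [{"role": "user",
            "content": "Cite cases on airline liability under the Montreal Convention."}]
first = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# The follow-up question is answered by the same next-token predictor that
# produced the citations. There is no database lookup behind it, so a
# confident "yes, those cases are real" is just another plausible-sounding
# completion, not a verification.
history.append({"role": "user", "content": "Are the cases you cited real?"})
second = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
print(second.choices[0].message.content)
```

In other words, the “verification” runs through exactly the machinery that invented the citations in the first place.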
During the proceedings, however, opposing counsel laid out just how thoroughly the submission from Levidow, Levidow & Oberman, the plaintiff’s firm, was riddled with falsehoods. In one instance, the nonexistent Varghese v. China Southern Airlines Co., Ltd. referenced a real case, Zicherman v. Korean Air Lines Co., Ltd., but misdated it, placing the decision 12 years after the actual 1996 ruling.
Schwartz claimed he was “unaware of the possibility that its content could be false” and expressed deep regret for relying on generative artificial intelligence without independently verifying its output. He pledged never to use the technology again without absolute certainty of its accuracy.
Notably, Schwartz is not admitted to practice in the Southern District of New York, where the lawsuit was eventually transferred. He continued to work on the case nonetheless, with another attorney at the firm, Peter LoDuca, serving as attorney of record. LoDuca will be required to appear before the judge to explain what happened.
The incident is a stark reminder of the risks of relying on chatbots for research without independently verifying their claims against reliable sources. It recalls the period when Microsoft’s Bing chatbot was caught spreading false information and even gaslighting and emotionally manipulating users, and when Google’s AI chatbot, Bard, fabricated a fact during its very first demonstration. The Avianca lawsuit underscores the importance of thorough fact-checking and independent source verification whenever AI language models are used for legal research or any other critical information retrieval.
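What would independent verification look like in practice? Here is a hypothetical sketch that checks each citation against CourtListener’s free case-law database instead of asking the model that produced it. The endpoint and response fields are assumptions based on CourtListener’s documented REST API, so treat this as an outline rather than a drop-in tool:

```python
import requests

# Assumed: CourtListener's v3 search endpoint, queried for opinions ("type=o").
SEARCH_URL = "https://www.courtlistener.com/api/rest/v3/search/"

def citation_exists(case_name: str) -> bool:
    """Return True if the case-law search finds at least one matching opinion."""
    resp = requests.get(SEARCH_URL, params={"q": case_name, "type": "o"}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0  # "count" field is an assumption

for case in ["Zicherman v. Korean Air Lines Co.",        # real 1996 decision
             "Varghese v. China Southern Airlines Co."]:  # ChatGPT invention
    print(f"{case}: {'found' if citation_exists(case) else 'NOT FOUND'}")
```

The point is less the specific service than the shape of the check: the lookup happens in a system of record, outside the model.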
Convincingly mimicking the patterns of written language is worth little if a model cannot get simple details right, such as how many times a letter occurs in a word.
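That detail really is simple; one line of ordinary Python settles it exactly, with no prediction involved:

```python
word = "occurrences"
print(word.count("c"))  # deterministic answer: 3
```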