In a shocking turn of events, seasoned lawyer Steven Schwartz of Levidow, Levidow & Oberman has put his legal career at risk by relying on an AI chatbot. ChatGPT, the popular chatbot developed by OpenAI, fabricated a series of fictional legal cases that Schwartz cited in a recent court filing. The incident, which could derail Schwartz’s three-decade career, serves as a cautionary tale about the dangers of relying solely on AI in the legal profession.

ChatGPT fabricated fictitious cases in response to the lawyer’s query

The case at the center of this controversy is Mata v. Avianca, where a customer named Roberto Mata sued the airline after sustaining a knee injury from a serving cart during a flight. As Mata’s lawyers sought to counter Avianca’s attempt to dismiss the case, they submitted a brief citing several previous court decisions. Schwartz, representing Mata, turned to ChatGPT to supplement his own research and provide additional examples.

Instead, ChatGPT generated a list of entirely fabricated cases, including Varghese v. China Southern Airlines, Shaboon v. Egyptair, Petersen v. Iran Air, Martinez v. Delta Airlines, Estate of Durden v. KLM Royal Dutch Airlines, and Miller v. United Airlines. None of these decisions exist.

Both Avianca’s legal team and the presiding judge swiftly discovered that these referenced cases did not exist. In an affidavit, Schwartz admitted his unwitting reliance on the false information generated by ChatGPT. The lawyer even presented screenshots of his interactions with the chatbot, where he sought confirmation of the cases’ authenticity and received misleading responses.

This incident highlights the risks of placing blind trust in AI tools within the legal field. While AI can undoubtedly speed up legal research, human oversight and critical thinking remain essential to ensuring the accuracy and reliability of the information it produces.

As the case unfolds, it serves as a wake-up call for legal professionals to exercise caution when incorporating AI into their practice. Reliance on the technology should always be paired with human verification and a clear understanding of the limitations and potential biases of AI systems. Failure to do so can have dire consequences, as Steven Schwartz’s unfortunate predicament illustrates.
