Lawyer in Trouble Over AI Chatbot’s Fabricated Cases

By Kerry Howard Mwesigwa.

In a shocking turn of events, Steven A. Schwartz, an attorney handling a personal injury lawsuit in Manhattan, finds himself in legal jeopardy after submitting a federal court filing that cited six nonexistent legal cases. The source of the misinformation? An AI chatbot named ChatGPT, which Schwartz relied upon for legal research, unaware that it could invent cases outright.

The lawsuit centers on a man who claims he injured his knee when he was struck by a serving cart on a 2019 Avianca Airlines flight. New to the tool, Schwartz turned to ChatGPT for research, not realizing that it would generate fictitious cases and falsely assure him of their authenticity.

The gravity of the situation became apparent when the airline’s legal team noted in a subsequent filing that the cited cases could not be found. Schwartz was taken aback, realizing that his trust in the chatbot had led him astray.

ChatGPT, introduced in late 2022, gained popularity for its realistic conversational abilities. Accuracy, however, has been a persistent problem: the chatbot routinely fabricates facts and sources. Similar problems have been observed with Bard, a competing AI product developed by Google.

Despite these limitations, many people continue to treat this experimental technology as a source of reliable information. Students reportedly use ChatGPT to draft academic papers, and some educators ask the chatbot itself to verify whether a paper was machine-written. Yet the reported accuracy rate of OpenAI’s own detection service, designed to identify text produced by ChatGPT, stands at a mere 20%. ChatGPT itself cannot tell whether it authored a given paragraph, and will sometimes claim text it never wrote.

The use of chatbots like ChatGPT remains contentious amid fears of AI spiraling out of control. While some imagine scenarios reminiscent of the “Terminator” movies, in which AI threatens humanity, the reality is more mundane: these chatbots are advanced predictive-text tools prone to generating inaccuracies. They invent sources and confidently vouch for them, despite having no ability to verify their own claims.

The true concern with AI lies not in machines developing independent wills but in humans accepting their output unquestioningly, regardless of whether it is correct. ChatGPT has no regard for the accuracy of what it generates; it is a captivating illusion rather than a factual resource. As AI becomes more deeply embedded in everyday work, users must verify facts for themselves.

Responding to this perplexing situation, Judge P. Kevin Castel has scheduled a hearing for June 8 to address the fabricated citations and consider possible sanctions. The integrity of the court’s proceedings is now under scrutiny.

This incident highlights the limitations of today’s AI technology and underscores the need for caution, fact-checking, and critical assessment, especially in legal proceedings. As AI continues to advance, users must remain vigilant to prevent the spread of false information.
