The credibility of ChatGPT, an AI chatbot developed by OpenAI, has been called into question after it convinced a lawyer that the citations it provided were legitimate when they were in fact fabricated. Lawyer Steven A. Schwartz, who was representing a client in a lawsuit against Avianca, a Colombian airline, admitted in an affidavit that he had relied on the chatbot for his research, as reported by The New York Times.


During the proceedings, the opposing counsel pointed out that several of the cited cases were non-existent. US District Judge Kevin Castel reviewed the submissions and confirmed that six of the cases cited in the lawyer's arguments were fabricated judicial decisions, complete with false quotes and internal citations. As a result, the judge has scheduled a hearing to consider potential sanctions against the plaintiff's legal team.


Schwartz claimed that he had asked the chatbot whether the information it was providing was accurate. When he requested a source for the citations, ChatGPT apologised for the earlier confusion and insisted that the cited case was indeed real, and it maintained that the other cases it had referenced were genuine as well. Schwartz admitted that he had been unaware the chatbot's output could be false. He expressed deep regret for relying on generative artificial intelligence to supplement his legal research and vowed never to do so again without thoroughly verifying its authenticity.


This incident follows another recent controversy involving ChatGPT, in which the chatbot falsely implicated an innocent and highly respected law professor, Jonathan Turley, in a research study on legal scholars who had engaged in sexual harassment. Turley, who holds the Shapiro Chair of Public Interest Law at George Washington University, was shocked to discover that ChatGPT had mistakenly included his name on the list of scholars accused of misconduct. He took to Twitter to express his disbelief, stating, "ChatGPT recently issued a false story accusing me of sexually assaulting students."


These incidents raise concerns about the reliability of AI-generated content and the risks of using it in legal research and decision-making. Stringent verification and fact-checking have become increasingly necessary when using AI tools like ChatGPT in legal contexts, to prevent the spread of false information and the harm it can cause to individuals and legal proceedings.