Morgan & Morgan, a prominent US personal injury law firm, recently warned its team of more than 1,000 attorneys about the dangers of relying on artificial intelligence for legal research. The firm emphasized that AI-generated case law can be entirely fabricated and that submitting false information in court filings could result in termination, Reuters reported.


The advisory came after a federal judge in Wyoming considered sanctioning two Morgan & Morgan lawyers who had cited non-existent case law in a lawsuit against Walmart. One of the attorneys admitted to using an AI tool that produced fictitious legal references, calling it an unintentional error and apologizing in court filings last week.


Lawyers Questioned For Using AI-Generated Legal Citations


The issue of AI-generated misinformation has surfaced in multiple US courts over the past two years, with lawyers questioned or disciplined for citing fabricated, AI-generated case law in at least seven instances. The Walmart case is particularly significant because it involves a high-profile law firm and a major corporation, but similar incidents have been appearing in lawsuits since AI-powered chatbots like ChatGPT became widely accessible, posing new challenges for attorneys and judges alike.


The judge has yet to decide whether the lawyers in the Walmart case will face disciplinary action. The case itself involves allegations that a hoverboard toy was defective.


The rapid development of generative AI is transforming legal research and brief drafting, significantly cutting down the time lawyers spend on these tasks. As a result, many law firms are either partnering with AI providers or creating their own AI-powered tools. A survey conducted last year by Thomson Reuters, the parent company of Reuters, found that 63 per cent of lawyers had used AI in their work, with 12 per cent incorporating it into their practice regularly.


Despite its advantages, generative AI is prone to fabricating information—a phenomenon known as "hallucination." Legal experts caution that AI models generate responses based on statistical probabilities rather than verifying facts, meaning lawyers must be diligent when using AI-generated content.


Ethical rules for attorneys require them to thoroughly review and take full responsibility for their court filings, even if an error was caused by AI. The American Bar Association reminded its 400,000 members last year that these professional obligations apply regardless of whether misinformation is unintentional or AI-generated.


Andrew Perlman, dean of Suffolk University Law School and an advocate for AI in legal work, emphasized that while research tools are evolving, the legal profession's accountability standards remain unchanged. Perlman said, "When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that's incompetence, just pure and simple."


How Has This Been Dealt With In The Past?


One of the first major judicial reprimands regarding AI use in legal filings came in June 2023, when a federal judge in Manhattan fined two New York lawyers $5,000 for referencing fabricated case law in a personal injury lawsuit against an airline. Similarly, another New York judge last year considered imposing sanctions in a case involving Michael Cohen, former attorney for Donald Trump. Cohen admitted to mistakenly providing his lawyer with AI-generated case citations, which were then included in court filings related to his criminal tax and campaign finance case.


Although Cohen and his attorney avoided penalties, the judge described the incident as "embarrassing." In a separate case, a Texas federal judge in November ordered a lawyer to pay a $2,000 fine and complete a course on generative AI after citing non-existent cases in a wrongful termination lawsuit. More recently, a Minnesota federal judge said a misinformation expert had damaged his credibility after unknowingly citing fake, AI-generated references in a case concerning a "deepfake" parody of Vice President Kamala Harris.