US Lawyers Face Scrutiny Over AI-Generated Court Filings

A recent survey found that 63% of lawyers have used artificial intelligence in their work, with 12% using it regularly.

However, AI’s role in legal proceedings has sparked controversy after multiple incidents of fabricated case citations in court filings.

Morgan & Morgan, a leading personal injury law firm in the U.S., recently issued an urgent warning to its more than 1,000 attorneys about the use of AI in legal documents.

This comes after a federal judge in Wyoming considered sanctioning two lawyers for submitting a lawsuit against Walmart that included fictitious case references.

One of the attorneys acknowledged relying on an AI tool that generated non-existent cases, calling it an unintentional error.

This incident is part of a growing trend, with courts disciplining or questioning lawyers in at least seven cases over the past two years due to AI-related inaccuracies.

AI’s Legal Challenges: A Widespread Concern

The Walmart case has drawn attention due to the involvement of a major law firm and corporate entity.

Similar incidents, however, have emerged across a range of lawsuits since AI-powered chatbots like ChatGPT became widely available, introducing new risks for legal professionals and the judiciary.

Morgan & Morgan and Walmart have not publicly commented on the case.

The judge has yet to decide on potential disciplinary action against the lawyers, whose lawsuit involves an allegedly defective hoverboard toy.

Despite concerns, generative AI is streamlining legal research and brief drafting, prompting many law firms to adopt AI tools or develop proprietary systems.

However, AI’s tendency to generate incorrect information—known as “hallucinations”—poses significant risks. Legal experts emphasize that lawyers must verify AI-generated content before submission.

Ethical guidelines require attorneys to validate and be accountable for their filings, even if errors arise from AI tools.

The American Bar Association has reminded its 400,000 members that these obligations extend to any misinformation, regardless of intent.

Andrew Perlman, dean of Suffolk University’s law school and an advocate for AI in legal practice, stressed the importance of due diligence. “When lawyers submit AI-generated citations without verifying them, it’s pure incompetence,” he stated.

Legal Repercussions and Court Reactions

In June 2023, a federal judge in Manhattan fined two New York lawyers $5,000 for citing fictitious AI-generated cases in a personal injury lawsuit against an airline.

In another case, Michael Cohen, former lawyer to Donald Trump, admitted to mistakenly providing his attorney with AI-generated citations that were later submitted in court.

Though Cohen and his lawyer avoided sanctions, the judge described the situation as “embarrassing.”

Other courts have taken disciplinary measures. In November, a Texas federal judge fined a lawyer $2,000 and mandated AI-related training after the lawyer cited non-existent cases in a wrongful termination lawsuit.

More recently, a Minnesota federal judge said an expert witness had undermined his own credibility by unintentionally including AI-generated misinformation in his testimony, in a case involving a “deepfake” parody of Vice President Kamala Harris.

AI Literacy in the Legal Field

Harry Surden, a law professor at the University of Colorado, believes these cases highlight a “lack of AI literacy” among lawyers rather than fundamental flaws in the technology itself.

“Lawyers have always made mistakes in their filings before AI,” Surden said. “This is not new.” He advises legal professionals to better understand AI’s strengths and limitations to avoid future pitfalls.

As AI integration in law continues to expand, the legal industry faces a critical challenge: balancing efficiency gains with ethical and professional responsibilities.
