Lawyers Face Backlash Over AI-Generated Hallucinations

Recently, numerous reports have emerged regarding attorneys facing repercussions for including “bogus AI-generated research” in their legal filings. The scenarios are varied, but the common thread is that lawyers have turned to large language models (LLMs), such as ChatGPT, for assistance with legal research or document drafting. Unfortunately, these models occasionally produce non-existent cases, leaving legal professionals unaware of the errors until challenged by a judge or opposing counsel. In some instances, such as a 2023 aviation case, lawyers have faced fines for submitting documents containing these AI-generated inaccuracies. This raises the question: why do they continue this practice?

The explanation primarily lies in the pressures of tight deadlines, compounded by AI’s infiltration into various professions. Legal research platforms like LexisNexis and Westlaw now feature AI capabilities. For lawyers managing extensive caseloads, AI can present an alluringly efficient solution. While many do not use ChatGPT directly for drafting, utilization of LLMs for research is on the rise. However, a significant number of these lawyers remain unclear about the functions and limitations of LLMs. One lawyer, who faced sanctions in 2023, initially thought of ChatGPT as merely a “super search engine,” only to realize it operates more like a random text generator, capable of producing both accurate and misleading information.

Andrew Perlman, the dean of Suffolk University Law School, argues that many attorneys use AI tools effectively. Those who are caught submitting flawed citations are the exceptions rather than the rule. “While the challenges of hallucination are real and require careful attention from lawyers, it does not overshadow the significant benefits that AI can bring to the delivery of legal services,” Perlman stated. Major legal databases and research tools like Westlaw are integrating AI functionalities, enhancing their value for practitioners.

A recent survey by Thomson Reuters found that 63% of lawyers reported having used AI at some point, with 12% saying they use it regularly. Many respondents said they employ AI for tasks such as summarizing case law and researching statutes or sample language, and half said that exploring how to implement AI in their practices is a priority. One attorney remarked that the essence of a good lawyer lies in being a “trusted advisor” rather than a mere document creator.

Nonetheless, numerous cases illustrate that AI-generated documents can contain inaccuracies or entirely fabricated information.


In a notable recent case involving journalist Tim Burke, who was arrested for publishing unaired Fox News footage, his attorneys submitted a motion to dismiss the charges on First Amendment grounds. Upon reviewing the motion, Judge Kathryn Kimball Mizelle of the Middle District of Florida found it contained “significant misrepresentations and misquotations of supposedly pertinent case law and history.” After identifying nine fabricated citations, she ordered the document stricken from the record.

Despite the errors, Judge Mizelle permitted Burke’s legal team, led by attorneys Mark Rasch and Michael Maddux, to submit a revised motion. In a subsequent filing, Rasch took full responsibility, stating that he had used ChatGPT’s “deep research” feature along with the AI tools available through Westlaw, which have shown inconsistent performance.

Rasch is not the only lawyer to encounter issues with AI. Attorneys representing Anthropic acknowledged using Claude AI to assist in drafting an expert witness declaration for a copyright infringement lawsuit. Their submission mistakenly included an inaccurately titled citation and incorrectly attributed authors. Additionally, misinformation expert Jeff Hancock admitted to relying on ChatGPT to organize citations in support of a Minnesota law against deepfakes, resulting in multiple citation errors.

The significance of these documents is underscored by judges. A California judge overseeing a case against State Farm found that cited case law was entirely fabricated, stating, “I read their brief, was persuaded by the authorities cited, but upon verification discovered they didn’t exist.”

Perlman suggests that lawyers can utilize generative AI in safer ways, such as scanning large volumes of discovery materials, reviewing briefs, and brainstorming arguments. He emphasized that while generative AI can enhance lawyers’ efficiency, it should not replace their judgment or expertise.

However, reliance on AI necessitates diligent verification processes. Perlman noted that time constraints can lead attorneys to overlook citation accuracy, a challenge predating the rise of LLMs. “Even before generative AI, lawyers sometimes submitted citations that inadequately addressed the issues at hand,” he noted. The influx of AI has merely introduced a different layer to this ongoing problem.

Adding to the issue is a prevalent overreliance on AI outputs. Many professionals, lawyers included, may feel misled by the seemingly polished results produced by AI systems. Perlman remarked that users often become complacent, erroneously believing the outputs are inherently reliable.

Alexander Kolodin, an election lawyer and Republican state representative in Arizona, illustrates a practical application of AI in drafting legislation. In 2024, he incorporated AI-generated text when formulating a bill regarding deepfakes, using an LLM for the initial definitions while adding the necessary human-rights protections himself. Kolodin said he did not fully disclose his use of ChatGPT to his colleagues, aiming for an element of surprise; the bill ultimately became law.

Kolodin, who previously faced sanctions for his involvement in legal challenges to the 2020 election results, has used ChatGPT for drafting amendments and legal research. He emphasized the need for citation verification, stating that just as one would not send a junior associate’s work product unreviewed, the same diligence should apply to AI-generated work.

Kolodin employs both ChatGPT’s advanced “deep research” capabilities and the AI tools available through LexisNexis. He claims that LexisNexis has a higher incidence of inaccuracies than ChatGPT, which he views as having demonstrated improvement in reliability over the past year.

AI’s integration into the legal profession has progressed to the point where in 2024, the American Bar Association issued its first guidance on attorneys’ use of LLMs.

According to this guidance, lawyers are expected to maintain a standard of competence, which includes staying informed about the technological landscape of AI. Lawyers are advised to cultivate an understanding of the advantages and risks presented by generative AI tools, rather than assuming these models function flawlessly. The guidance also highlights the importance of assessing the confidentiality implications when inputting sensitive case information into AI systems and suggests that attorneys inform their clients about their use of these technologies.

Perlman expresses optimism regarding the future of AI in the legal field, predicting it will become a transformative force within the profession. “I believe generative AI will prove to be the most influential technology the legal industry has encountered, and I foresee a shift where concern will pivot from the competence of lawyers utilizing these tools to the competence of those who choose not to,” he stated.

Conversely, others, including judges who have reprimanded lawyers for AI-related mistakes, remain skeptical. Judge Michael Wilner remarked, “Even with recent advancements, no competent attorney should rely solely on this technology for research and writing, particularly without verifying the information presented.”
