AI “hallucinations” in legal filings — a new challenge for courts

Artificial intelligence is rapidly entering legal practice. Lawyers now routinely use generative AI tools to summarise cases, draft submissions, and assist with legal research. But alongside these efficiencies, courts around the world are confronting an unexpected problem: AI-generated legal arguments that contain fabricated case law.

This phenomenon is often referred to as an AI “hallucination.” In simple terms, the technology produces information that appears authoritative but is entirely invented. In the legal context, this can mean non-existent cases, fabricated quotations from judgments, or citations that simply do not exist in law reports.

For the judiciary, this issue strikes at something fundamental: the reliability of legal authority.

When AI Invents the Law

Several courts have already encountered filings where lawyers relied on generative AI tools to assist in drafting submissions, only to discover that the technology had invented legal authorities.

In one widely reported incident, lawyers were fined after submitting filings containing AI-generated case citations that did not exist, highlighting the risks of relying on automated research without verification.

Another ruling found that 12 of 19 cases cited in a brief were fabricated or unsupported, a pattern consistent with AI-generated hallucinations.

These incidents are no longer isolated. Legal researchers tracking the phenomenon have documented dozens of cases where AI-generated legal research introduced fictitious authorities into court filings, leading to judicial reprimands, fines, and professional disciplinary action.

Even senior lawyers are not immune. In Australia, a King’s Counsel apologised to a judge after submissions in a criminal case contained fabricated quotes and non-existent case citations generated by AI, delaying the proceedings.

Judicial Responses: Verification Is Non-Negotiable

Courts have reacted strongly.

Judges have emphasised that professional obligations apply regardless of how legal research is produced. Whether a lawyer relies on a junior clerk, a research assistant, or an AI system, the responsibility to verify authorities remains unchanged.

In one federal case, a judge fined lawyers $12,000 for filing documents containing AI-generated fictitious case law, stressing that attorneys have an ethical duty to ensure the accuracy of every submission to the court.

Other courts have gone further. Some judges have disqualified lawyers from cases or referred them to disciplinary authorities after discovering fabricated AI citations in pleadings.

The emerging judicial consensus is clear:

  • AI cannot be relied upon as an authoritative legal source.

  • Lawyers must independently verify every citation.

  • Courts must remain vigilant when reviewing AI-assisted submissions.

These responses reflect a broader concern: the integrity of the justice system itself.

Why This Matters for the Bench

Legal reasoning depends on precedent. When a court cites authority, there is an implicit assumption that the underlying case law exists and has been accurately represented.

AI hallucinations undermine that assumption.

If fabricated citations appear in submissions, judges may be forced to spend additional time verifying authorities, correcting the record, or addressing professional misconduct. In extreme cases, proceedings can be delayed or compromised.

Beyond procedural inconvenience, the issue raises deeper institutional concerns.

The authority of the courts depends on the credibility of legal sources. If automated systems introduce false authorities into legal argument, the reliability of legal reasoning itself may be called into question.

New Questions for the Judiciary

The rise of AI-assisted legal drafting is forcing courts to confront several practical and ethical questions:

Should lawyers be required to disclose when AI tools were used in preparing submissions?

Some courts and law firms are already considering disclosure requirements to ensure transparency when generative AI has contributed to legal filings.

Should courts develop formal AI guidelines?

Bar associations and courts are beginning to issue guidance on responsible AI use in legal practice, emphasising verification, competence, and professional accountability.

What responsibility do judges have to scrutinise AI-assisted arguments?

While courts traditionally rely on counsel to present accurate authorities, the emergence of AI hallucinations may require greater judicial vigilance when unfamiliar citations appear in submissions.

A Technology Problem — or a Human One?

It would be easy to frame this issue as a technological failure. But in many cases, courts have emphasised that the underlying problem is not the technology itself, but the human failure to verify it.

Generative AI systems are designed to predict plausible language, not to guarantee factual accuracy. Benchmarking studies have found that even purpose-built legal AI tools hallucinate in a meaningful share of queries — by one Stanford estimate, roughly one in six or more.

For the judiciary, the lesson is less about banning AI and more about reinforcing traditional professional standards.

Technology may change the tools lawyers use, but it does not change the fundamentals of legal practice:

  • verify the law

  • cite real authority

  • and ensure that every submission to the court is accurate.

The Road Ahead

Artificial intelligence will almost certainly remain part of the legal profession. Properly used, it has the potential to improve efficiency, assist with research, and expand access to legal information.

But the early wave of AI hallucination cases serves as a reminder that the law cannot be automated without oversight.

For judges, the challenge is not simply technological. It is institutional.

The courts must determine how to harness the benefits of new tools while protecting the reliability of legal reasoning — the foundation on which judicial authority ultimately rests.

Sources:

Reuters — Judge fines lawyers over AI-generated submissions in patent case
https://www.reuters.com/legal/litigation/judge-fines-lawyers-12000-over-ai-generated-submissions-patent-case-2026-02-03/

Reuters — Judge disqualifies attorneys after AI citations in court filing
https://www.reuters.com/legal/government/judge-disqualifies-three-butler-snow-attorneys-case-over-ai-citations-2025-07-24/

Washington Post — Judges warn lawyers about AI hallucinations in legal filings
https://www.washingtonpost.com/nation/2025/06/03/attorneys-court-ai-hallucinations-judges/

ABC News Australia — Lawyer apologises after AI-generated court submissions contained false cases
https://www.abc.net.au/news/2025-08-15/victoria-lawyer-apologises-after-ai-generated-submissions/105661208

Cronkite News — Lawyers face consequences for AI hallucinations in court filings
https://cronkitenews.azpbs.org/2025/10/28/lawyers-ai-hallucinations-chatgpt/

JD Supra — AI hallucinations in court: a wake-up call for the legal profession
https://www.jdsupra.com/legalnews/ai-hallucinations-in-court-a-wake-up-4503661/

Stanford Institute for Human-Centered AI — Legal AI models hallucinate in a significant share of queries
https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries
