Given the speed at which tech moves these days, this might seem a little behind the times, but it's something that needs to be brought up again and again. See, back in 2024 (yeah, a lifetime ago), I started using ChatGPT.
Then around early 2025, as I started cranking out a whole lot of legal research and blog content, I became acquainted with holographic cases, which quickly became synonymous with how ChatGPT and other LLMs (Large Language Models) treated attorneys and case law.
Before we get into the meat of this blog, let's define what I mean by Holographic case law and why it happens.
HOLOGRAPHIC CASE LAW is a fictitious legal case generated by a large language model (LLM) that appears to be real, complete with:
- Case name
- Citation / authority
- Page numbers
- Court jurisdiction
…but does not actually exist in any legal reporter or database.
So, picture it - you're researching a legal topic and you hop over to ChatGPT or Grok or Gemini or Claude or DeepSeek or Perplexity or any of the other few dozen LLMs in Internet-land, and you run a search on it. In the course of your searching you see references to several legal-esque cases.
The problem is that when you try to find those cases on the Internet, or try to cite-check them (like any 1st-year law student should have learned to do), you come up empty.
The reason you can't find the cases is because they don't exist.
Wait, don't exist?!
How or why is that even possible?!? Why might an LLM crank out a case that doesn't exist?
Funny thing, but I had that exact same question, so I posed it to ChatGPT, which responded with:
That’s a great question — and it’s one of the most important issues when using AI for legal research.
ChatGPT (and other LLMs) sometimes return cases that don’t exist because of the way large language models work:
- Pattern Matching, Not Databases:
I don’t have a built-in legal case database. Instead, I was trained on lots of legal writing and I generate responses by predicting what looks like a plausible answer. If you ask for a case citation, I try to produce one that fits the pattern of a real case — correct reporter, volume, page — even if it’s made up.
- Pressure to Be Helpful:
If you ask for “a case where X happened,” I try to fulfill that request. If I don’t have a known case in memory, I may invent one that sounds right rather than say “I don’t know,” especially if your prompt suggests that such a case must exist.
- Legal Citations Are Predictable:
Because case citations follow a rigid format (e.g., Smith v. Jones, 123 F.3d 456 (9th Cir. 2020)), I can generate something that looks very convincing — even to experienced lawyers. But if you check the reporter, the case might not exist.
- Contextual Merging (Hallucination):
Sometimes I blend real cases. For example, I might take facts from Case A and a holding from Case B and give it a citation that matches neither. This is not intentional — it’s a statistical artifact.
This problem is exactly what led to the high-profile Mata v. Avianca (S.D.N.Y. 2023) case — where lawyers used ChatGPT to write a brief, it cited six fake cases, and the court sanctioned them.
✅ Best Practice When Using AI for Legal Research:
- Treat all AI-supplied case names and citations as leads, not authorities.
- Verify every case in a trusted source (Westlaw, Lexis, Fastcase, Casetext, etc.) before using it.
- If you can't find it in an official reporter or database, assume it doesn't exist.
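Since the "leads, not authorities" advice is the part that actually matters, here's a quick sketch of what that looks like in practice. This is my own toy Python example (nothing ChatGPT or any legal research vendor actually ships); the regex is an assumption that only catches the common volume-reporter-page shape, and the sample "draft" text is made up. The point it illustrates: a string can match the citation pattern perfectly and still point at nothing.

```python
import re

# Toy sketch: pull citation-shaped strings out of an LLM draft so you have a
# checklist of what to verify. The pattern below is an assumption for
# illustration -- it only covers the common "volume Reporter page (court year)"
# shape, e.g. 123 F.3d 456 (9th Cir. 2020), and misses plenty of real formats.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+"                                          # volume, e.g. 123
    r"(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|F\. Supp\.(?: 2d| 3d)?)\s+"  # reporter
    r"\d{1,4}"                                               # first page, e.g. 456
    r"(?:\s+\([^)]{1,40}\d{4}\))?"                           # optional (9th Cir. 2020)
)

def extract_citation_leads(llm_output: str) -> list[str]:
    """Return every string in the model's answer that merely LOOKS like a citation.

    A match proves nothing: the model generates citations by imitating this same
    pattern, so every lead still has to be pulled up in Westlaw, Lexis, or another
    real database before it goes anywhere near a filing.
    """
    return [m.group(0) for m in CITATION_PATTERN.finditer(llm_output)]

if __name__ == "__main__":
    draft = (
        "In Smith v. Jones, 123 F.3d 456 (9th Cir. 2020), the court held ... "
        "see also Doe v. Roe, 45 F. Supp. 3d 789 (S.D.N.Y. 2014)."
    )
    for lead in extract_citation_leads(draft):
        print("VERIFY BEFORE CITING:", lead)
```

And yes, the irony is that this regex is doing the same pattern-matching the model did when it invented the citation in the first place, which is exactly why every match is a to-do item for Westlaw or Lexis, not authority you can file.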
While it's nice that ChatGPT admits that it screwed up, what's problematic is that it does so so nonchalantly - as if it's just another day in the rain and it's no problem: so what, I screwed up - what are you gonna do about it?!?
And that's the rub, isn't it? It's not that you can sue an LLM for returning holographic case law - or can you? I mean, here you are against a deadline and you run a search in an LLM for a "Motion for Summary Judgment with 4 cases on point" and you submit the results to the court, only to discover at your hearing for sanctions that the 4 cases on point don't exist (because they're holographic).
Yeah, you could have cite-checked the cases before you submitted your motion, but you were against a deadline, and what self-respecting attorney actually cite-checks their work, right?!?
As it turns out, existing lawsuits against AI companies focus primarily on copyright infringement, where authors, artists, and news organizations allege their protected works were used without a license to train the LLMs. These cases are distinct from those arising from "hallucinated" case law in legal filings.
Ultimately, the consensus in the legal community is that the onus remains entirely on the human attorney to verify AI-generated work before it is submitted to a court.
I mean, it's a novel idea (to sue an LLM instead of cite-checking your work before filing with the court), given that these days no one wants to take responsibility for screwing up because it's always someone else's fault, right?
You know, as this is a new year and a time for new resolutions, maybe this could be one of yours - to stop blaming ChatGPT for everything wrong with the world.
Yes?....No?....can I at least get an Amen?