OK, OK, so "AI" is not, per se, a word so much as it is an acronym for "Artificial Intelligence."
Great. With that out of the way, what is AI (or artificial intelligence)?
An overly complex definition of AI is: Artificial Intelligence (AI) is a multidisciplinary domain within computer science and cognitive science that involves the design, development, and analysis of computational systems capable of performing tasks traditionally requiring human cognitive processes such as perception, reasoning, learning, decision-making, and natural language understanding. It encompasses the creation of algorithms and models that enable machines to acquire representations of their environment, generalize from data, adapt to new information, and exhibit goal-directed behavior under varying conditions of uncertainty. AI draws on subfields including machine learning, knowledge representation, heuristic search, and robotics, leveraging statistical methods, neural architectures, and symbolic reasoning to enable autonomous or semi-autonomous systems to optimize actions in complex, dynamic environments while adhering to constraints defined by computational, ethical, and social considerations.
Got all that?
In more simplistic terms, AI is basically a fancy robot brain that tries to fake being smart so you don’t have to be.
That better?
Essentially, Artificial Intelligence is like building a mechanical apprentice that learns by watching, listening, and practicing, just as a human would, so it can help us carry out tasks.
For example: Imagine teaching a child to sort laundry by colors: you show them examples, correct mistakes, and eventually, they learn to do it on their own. AI works similarly, but instead of a child, it’s a computer system that learns from examples, patterns, and feedback so it can make decisions, recognize speech, translate languages, or drive a car.
It’s not truly “thinking” like a human, but it mimics parts of human learning and decision-making to help us do things faster, more consistently, and often on a much larger scale.
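If you want to see the laundry analogy as actual (toy) code, here's a minimal sketch in Python using scikit-learn's nearest-neighbor classifier. Everything in it, the color values, the labels, the new sock, is made up for illustration; real AI systems learn from far more data, but the show-examples-then-generalize loop is the same idea.

```python
# A toy sketch of "learning from examples," mirroring the laundry analogy.
# Assumes scikit-learn is installed; all data here is invented.
from sklearn.neighbors import KNeighborsClassifier

# Training examples: (red, green, blue) color values for items we've
# already sorted by hand -- the "showing the child examples" step.
shirts = [
    (250, 250, 245),  # white tee     -> lights
    (240, 235, 220),  # cream blouse  -> lights
    (30, 30, 35),     # black jeans   -> darks
    (10, 20, 90),     # navy sweater  -> darks
    (200, 40, 40),    # red polo      -> colors
    (40, 160, 60),    # green hoodie  -> colors
]
labels = ["lights", "lights", "darks", "darks", "colors", "colors"]

# "Practice" step: the model keeps the examples and generalizes by
# comparing each new item to the most similar one it has seen.
model = KNeighborsClassifier(n_neighbors=1)
model.fit(shirts, labels)

# A new, never-seen item: a light-gray sock.
print(model.predict([(220, 220, 225)])[0])  # -> "lights"
```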
Still unclear how it works in "real" life?
Say you're looking to draft a professional resume for a car-sales position, where you only have a few key skills that might be useful and you've worked at McDonald's slinging burgers for the last few years.
AI can crank out a really nice one-page resume for you based on those parameters. Of course, I'd take the time to make small edits, but it will look sharp.
Maybe the resume you submitted above landed you an interview in front of 21 people. While only 4 people asked you questions, you still need to send a thank-you letter to all 21. I've done this, and it took me 4 days to make each one a little different but relatable, using notes I took during the interview(s).
AI can crank out those 21 unique, professional letters based on their titles alone, do it in under 2 minutes, and make you look like a superstar.
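If you're curious what that looks like under the hood, here's a minimal sketch using the OpenAI Python SDK. The model name, the interviewer titles, and the prompt wording are all my assumptions for illustration; any chat-capable AI tool would work the same way.

```python
# A minimal sketch of the 21-letter trick using the OpenAI Python SDK
# (pip install openai; needs an OPENAI_API_KEY environment variable).
# The model name, titles, and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

# Three of the 21 interviewer titles, for brevity.
interviewers = ["VP of Sales", "Regional Sales Manager", "HR Director"]

for title in interviewers:
    prompt = (
        f"Write a short, professional thank-you letter to the {title} who "
        "sat in on my interview for a car-sales position. Mention my "
        "customer-service background at McDonald's, and keep the letter "
        "distinct from ones sent to other interviewers."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- Letter for the {title} ---")
    print(response.choices[0].message.content)
```

Feed it all 21 titles (and your interview notes), and you get 21 distinct drafts in one run. You'd still want to make those small personal edits before hitting send.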
Maybe you're a lawyer and you need help with your lawyer stuff. How might AI help you?
- Streamlined Legal Research: AI can quickly analyze vast amounts of legal data, identify relevant precedents, and suggest potential arguments, saving lawyers significant time and effort.
- Automated Contract Review: AI tools can scan contracts for key clauses, potential risks, and inconsistencies, accelerating the review process and improving accuracy (a toy sketch of this idea follows this list).
- Enhanced eDiscovery: AI can help manage and analyze large volumes of data during litigation, identifying relevant information more efficiently and reducing costs associated with discovery.
- Improved Risk Assessment: AI-powered tools can analyze historical case data and predict potential outcomes, enabling lawyers to better advise clients and mitigate risks.
- Drafting Legal Documents: AI can assist in drafting initial versions of motions, briefs, contracts, and other legal documents, saving time and improving consistency.
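To demystify the contract-review bullet above, here's a deliberately simple sketch of the core idea: scan a contract for must-have clauses and flag anything missing. Real legal-AI tools use trained language models rather than keyword patterns, so treat the clause list and regexes here as illustrative assumptions only.

```python
# A toy version of automated contract review: keyword-scan a contract for
# expected clauses and flag gaps. Real AI tools use trained language
# models; this sketch only pattern-matches, to show the basic workflow.
import re

MUST_HAVE_CLAUSES = {
    "indemnification": r"\bindemnif(?:y|ication)\b",
    "limitation of liability": r"\blimitation of liability\b",
    "termination": r"\bterminat(?:e|ed|es|ion)\b",
    "governing law": r"\bgoverning law\b",
}

def review(contract_text: str) -> None:
    lowered = contract_text.lower()
    for clause, pattern in MUST_HAVE_CLAUSES.items():
        status = "found" if re.search(pattern, lowered) else "MISSING -- flag for counsel"
        print(f"{clause:<25} {status}")

# Invented sample text: termination and governing law are present,
# indemnification and limitation of liability get flagged.
review("This Agreement may be terminated on 30 days' notice and is "
       "construed under the governing law of Utah.")
```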
Say a lawyer uses an AI tool (like ChatGPT, Microsoft Copilot, Google Gemini, Chatsonic, Grok, or any AI legal assistant) to draft a legal document. The AI is asked:
“Provide cases supporting the argument that emotional distress damages are recoverable in breach of contract cases in Utah.”
The AI responds with:
“Yes, see Smith v. Jones, 456 P.3d 789 (Utah 2019), where the Utah Supreme Court held that emotional distress damages were recoverable in a breach of contract case.”
However:
- Problem: Smith v. Jones does not exist. Well, it might exist somewhere, but not with that citation, set of facts, or holding. In this example, the AI generated a citation that sounds real but is entirely fabricated ("hallucinated"), including a made-up volume, page number, and holding.
- The AI pulled patterns from similar cases but created a false case to fit the prompt.
- If the attorney includes this citation in a filed brief, the attorney could (and probably should) face serious court sanctions, reputational damage, and ethical violations under ABA Model Rule 1.1 (Competence) and Rule 3.3 (Candor Toward the Tribunal).
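This is why a mechanical safety net is worth having before any filing. Below is a small sketch that pulls citation-shaped strings out of a draft so a human can Shepardize each one. The regex covers only common "volume Reporter page" patterns, and the draft text (reusing the fake Smith v. Jones cite from above, plus a placeholder Doe v. Roe) is sample data, so treat it as a starting point, not a cite-checker.

```python
# Extract citation-shaped strings from an AI draft and flag every one
# for manual verification (Shepard's / KeyCite) before filing.
import re

# Matches common "volume Reporter page" patterns, e.g. "456 P.3d 789".
CITATION = re.compile(
    r"\b\d{1,4}\s+"                                              # volume
    r"(?:U\.S\.|S\. ?Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?|P\.(?:2d|3d)?)"
    r"\s+\d{1,5}\b"                                              # page
)

# Sample draft: Smith v. Jones is the fabricated cite from the example
# above; Doe v. Roe is an obviously made-up placeholder.
draft = (
    "See Smith v. Jones, 456 P.3d 789 (Utah 2019); "
    "accord Doe v. Roe, 123 F.3d 456 (10th Cir. 1997)."
)

for cite in CITATION.findall(draft):
    print(f"VERIFY BEFORE FILING: {cite}")
```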
This exact scenario has already played out in court. In the first REAL case, Mata v. Avianca (S.D.N.Y. 2023), two New York attorneys filed a brief full of ChatGPT-invented citations. The attorneys asked ChatGPT if the cases were real, and ChatGPT falsely assured them they were, even providing fabricated excerpts.
Thing is, had they just Shepardized the cases, they would have discovered the discrepancies and avoided the penalty: a $5,000 fine and an order to notify the real judges falsely cited in their brief.
In the second REAL case, Park v. Kim (2d Cir. 2024) (aka the "Second ChatGPT Sanction Case"), a lawyer in New York used ChatGPT to draft an opposition brief in a personal injury case.
The brief included citations to non-existent cases. Opposing counsel flagged the citations as untraceable. The lawyer admitted to using ChatGPT without verifying the citations (i.e., he didn't Shepardize them).
In this second case, the court sanctioned the attorney, ordered him to pay opposing counsel's legal fees, and left him facing professional embarrassment (basically, he was laughed at at all future bar meetings).
Other cases where attorneys improperly used AI to draft legal documents (and were caught) include:
1) United States v. Hayes
Jurisdiction: U.S. District Court, Eastern District of California (2025)
What happened: A defense lawyer submitted a motion containing a fictitious case and quotation that appeared to be AI-generated. The court ordered the attorney to pay $1,500 and circulated the ruling to local bar associations and judges.
2) Butler Snow Attorneys (Disqualification Order)
Jurisdiction: U.S. District Court, Northern District of Alabama (2025)
What happened: Three attorneys from Butler Snow submitted filings with fabricated AI-generated citations in defending Alabama prison officials. The judge found the conduct improper, disqualified the lawyers from the case, and referred the matter to the Alabama State Bar.
3) Indiana Hallucination Citations (Ramirez)
Jurisdiction: U.S. District Court, Southern District of Indiana (2024-25)
What happened: In a case involving HoosierVac, an attorney filed multiple briefs containing made-up, AI-generated case citations. The magistrate judge recommended a $15,000 sanction and noted the lawyer had failed to check the AI output.
4) Eastern District of Michigan — Sanctions for AI-Related Errors
Jurisdiction: U.S. District Court, Eastern District of Michigan (2025)
What happened: Plaintiffs' counsel included in their responsive briefs real case names paired with fake quotes or misleading parentheticals that appeared to result from AI hallucinations. The court found Rule 11 violations and imposed monetary sanctions to deter future AI misuse.
5) Sanction (Southern District of Indiana — $6,000 Fine)
Jurisdiction: U.S. District Court, Southern District of Indiana (2025)
What happened: A federal judge fined an attorney $6,000 for filing briefs that included citations to nonexistent cases generated by an AI tool, emphasizing that such “hallucination cites” must be verified by counsel.
6) Bankruptcy Court Sanctions ("Manufactured Legal Authority")
What happened: A bankruptcy court found plaintiff's counsel used generative AI to "manufacture legal authority," resulting in sanctions including fees, continuing legal education, and referral to disciplinary counsel.
7) Bankruptcy Court Sanctions (Chapter 13 Proceeding, $5,500 Fine)
What happened: A bankruptcy court found that counsel filed a brief containing fabricated, AI-generated case citations (e.g., In re Montoya, In re Jager) in a Chapter 13 proceeding. The attorney admitted he used ChatGPT for legal arguments and did not verify the generated citations. The court held this violated Federal Rule of Bankruptcy Procedure 9011 and sanctioned the lawyer and his firm with a $5,500 fine and mandatory attendance at an AI education session.
8) Ford v. James Koutoulas & Lgbcoin, Ltd., No. 2:25-cv-23896-BPY, 2025 U.S. Dist. LEXIS 234696 (M.D. Fla. Dec. 2, 2025)
What happened: In this federal case, the defendants' summary judgment motion "contained several citations that the court and the plaintiffs suspected were GenAI hallucinations, where the court was unable to locate the cited authorities."
In most of these cases, the attorneys faced stiff fines and humiliation at the hands of their peers and the public, and some were referred to the State Bar for discipline.
What is key to note is that prior to 2023, there were no recorded instances of attorneys being caught improperly using AI in their filings.
Why?
Simply because the technology wasn't available until around 2023. Prior to late 2022, when ChatGPT launched, there was no widely used generative AI capable of producing case citations.
Earlier "AI" tools (like Westlaw's KeyCite or Lexis's Shepards) were search and analysis tools (meaning humans searched and analyzed when they got) - not generative drafting tools and neither KeyCite or Shepards produced hallucinated citations.
I suspect what happened is that law students and, subsequently, attorneys got lazy and stopped relying on their own efforts to draft legal documents, expecting that computers would continue to be reliable and not churn out non-existent citations.
The bottom line to all of this is that as great and helpful and fast as AI is, it is not 100% accurate.
Consequently, AI should never replace basic legal research practices (including cite-checking with Shepard's or KeyCite) or remove the human element (i.e., personally editing your own work).