Monday, February 23, 2026

Testify!


I don't know if you know, but there are a whole lot of myths about what police can and can't (or shouldn't) do.

Take, for example, these 10 (in no particular order) myths about police and their conduct with, around, or towards the general public:

MYTH 1: Police must always read you your Miranda rights when arresting you.
TRUTH: Miranda rights (Miranda v. Arizona, 384 U.S. 436 (1966)) are only required before questioning a suspect in custody.  If you’re arrested but not interrogated, officers don’t have to read you your rights.  Berkemer v. McCarty, 468 U.S. 420 (1984) clarified that Miranda applies to all custodial interrogations, including traffic stops IF they become custodial. 

MYTH 2: You have to answer all police questions.
TRUTH: You have the right to remain silent. You can (and should) say, “I’m exercising my right to remain silent” and “I want a lawyer.” Under Salinas v. Texas, 570 U.S. 178 (2013), silence before being read Miranda rights can be used against you unless you explicitly invoke the right to remain silent.  Whereas Edwards v. Arizona, 451 U.S. 477 (1981) notes that once a suspect asks for a lawyer, all questioning must stop until counsel is present.

MYTH 3: Police can’t lie to you.
TRUTH: They legally can lie during investigations or interrogations (e.g., “Your friend already confessed”).  However, lying on official reports or under oath is a crime.  Frazier v. Cupp, 394 U.S. 731 (1969) held that police deception during interrogation does not automatically make a confession involuntary.  Additionally, Oregon v. Mathiason, 429 U.S. 492 (1977) reinforced that voluntary stationhouse questioning, even if deceptive, doesn’t automatically require Miranda warnings.

MYTH 4: If you film the police, they can confiscate your phone or arrest you.
TRUTH: Recording police in public is protected under the First Amendment — as long as you don’t interfere with their duties.  They can’t legally delete, seize, or demand your footage without a warrant (though some still do).  Glik v. Cunniffe, 655 F.3d 78 (1st Cir. 2011) held that recording police in public is protected by the First Amendment and Fields v. City of Philadelphia, 862 F.3d 353 (3d Cir. 2017) reaffirmed citizens’ right to record police performing public duties.

MYTH 5: Police are legally required to protect you from harm.
TRUTH:  The Supreme Court has ruled multiple times (e.g., DeShaney v. Winnebago County, 489 U.S. 189 (1989)) that police have no constitutional duty to protect individuals, only the public at large.  Also, under Town of Castle Rock v. Gonzales, 545 U.S. 748 (2005) the court found that even with a restraining order, police are not constitutionally required to enforce protection.

MYTH 6: Police can offer you a deal to avoid charges.
TRUTH: Only prosecutors can make plea deals. Officers might suggest cooperation, but their “promises” aren’t legally binding. Under United States v. Goodwin, 457 U.S. 368 (1982), the court confirmed that prosecutorial discretion is broad but police cannot promise immunity or deals.

MYTH 7: You can’t sue police officers personally.
TRUTH: You can — but it’s very difficult due to qualified immunity, which protects officers from personal liability unless they violate “clearly established” rights.  Kisela v. Hughes, 138 S. Ct. 1148 (2018) reinforced how broadly courts interpret qualified immunity in police conduct cases.

MYTH 8: Police can use deadly force whenever they feel threatened.
TRUTH:  Deadly force can only be used when a reasonable officer believes there’s an imminent threat of death or serious injury.  Excessive or retaliatory force violates the Fourth Amendment.  Kingsley v. Hendrickson, 576 U.S. 389 (2015) clarified the standard for excessive force claims by pretrial detainees and Scott v. Harris, 550 U.S. 372 (2007) authorized high-speed chase interventions (e.g., PIT maneuvers) when the suspect poses a significant threat to public safety.

MYTH 9: Police can search your car just because they want to.
TRUTH:  They generally need probable cause, your consent, or a warrant.  Examples of probable cause: visible contraband, smell of drugs, or other evidence in plain sight.  Under the “automobile exception” first recognized in Carroll v. United States, 267 U.S. 132 (1925), police may search a vehicle without a warrant only if they have probable cause to believe it contains evidence of a crime.  Homes get stronger protection: according to Payton v. New York, 445 U.S. 573 (1980), police cannot enter a home without a warrant to make a routine felony arrest (absent exigent circumstances), though Brigham City v. Stuart, 547 U.S. 398 (2006) noted police may enter a home without a warrant to stop ongoing violence or render emergency aid.

MYTH 10: Resisting arrest is legal if the arrest is unlawful.
TRUTH:  Almost all states make resisting arrest illegal, even if the arrest was unjustified. You must challenge it later in court, not during the arrest.  United States v. Ferrone, 438 F.2d 381 (3d Cir. 1971) affirmed that resisting arrest is not justified, even if the arrest is unlawful.

If I may, and while we're on the subject of myths, I'd like to add another myth:  police can lawfully arrest you if you flip them off or swear at them. 

The reality is that courts have repeatedly held that verbally criticizing, cursing, or flipping off police officers is protected by the First Amendment, as long as you’re not making a true threat, inciting violence, or interfering with police duties.


While profanity or rude gestures alone are protected, you could be arrested if your behavior crosses certain lines, such as:

  • “Fighting words” – Words likely to provoke immediate physical retaliation (though this standard is rarely met).

  • True threats – Saying something like “I’ll kill you” or “I’m going to attack you.”

  • Obstruction / Interference – If your yelling physically interferes with police performing their duties.

  • Disorderly conduct – If your words are combined with aggressive actions that disturb the peace (not merely causing offense).

Even though arresting someone for protected speech is unlawful, officers sometimes do it anyway, often under vague charges like “disorderly conduct” or “resisting arrest.”  While these charges often get dismissed later, the person still has to deal with handcuffs, court, and lawyer fees.

So, say you flipped a cop the bird and s/he arrested you.  What can you do about it?

If you’re arrested or cited only because you used profanity, insulted, or flipped off a cop, you can typically sue under 42 U.S.C. § 1983, a federal law that lets citizens sue government officials (like police officers) for violating constitutional rights.

You would be suing for:

  • Violation of your First Amendment rights — retaliation for protected speech; and

  • Violation of your Fourth Amendment rights — unlawful or retaliatory arrest without probable cause.

To win a § 1983 case for retaliatory arrest or unlawful arrest, you generally have to show:

  1. You engaged in protected speech (swearing or flipping off is protected).

  2. The officer took adverse action (e.g., arrest, detention, ticket).

  3. The officer’s action was motivated by your speech — that is, they arrested you because of what you said or did.

  4. There was no probable cause for the arrest (e.g., “disorderly conduct” was bogus).

So, do people actually win cases against the police if they are wrongfully arrested (particularly for swearing at a cop)?  

Well, if you can prove that you engaged in protected speech, that the officer took adverse action, that the officer's actions were motivated by your speech, and that there was no probable cause for the arrest, qualified immunity (the doctrine shielding officers in many cases) often won’t apply — because the courts have long made clear that arresting someone for rude but protected speech is unconstitutional.

Following are several important and successful cases:

  • Duran v. City of Douglas, 904 F.2d (9th Cir. 1990); Arizona

    • Duran flipped off and cursed at a police officer.

    • Officer stopped and arrested him for disorderly conduct.

    • Court ruled the officer violated Duran’s rights and denied qualified immunity — the officer could be personally liable.

  • Swartz v. Insogna, 704 F.3d 105 (2d Cir. 2013); New York

    • Swartz gave a cop the middle finger and was stopped.

    • Court said the gesture was protected, and the officer could be sued for unlawful stop and retaliation.

  • Wood v. Eubanks, 25 F.4th 414 (6th Cir. 2022); Ohio

    • Man cursed at police and was arrested for disorderly conduct.

    • Court ruled swearing at police is protected speech, the arrest was unlawful, and the officers were not immune from being sued.

  • Thurairajah v. City of Fort Smith, 925 F.3d 979 (8th Cir. 2019); Arkansas

    • A driver yelled “F--- you!” out of his window at a state trooper.

    • The trooper arrested him for disorderly conduct.

    • Court said the arrest violated the First Amendment, and the officer could be personally sued.

So, let's say you sue for being arrested.  What can you get out of it?  Well, IF you win, you can typically get:

  • Compensatory damages — for emotional distress, lost wages, or costs of arrest.

  • Punitive damages — if the officer acted maliciously or recklessly.

  • Attorney’s fees — under § 1988, courts often make the government pay your legal costs.

Examples:

  • In some cases, plaintiffs have received $20,000–$75,000 settlements for wrongful arrest or retaliation based solely on swearing or gestures.

  • A few received six-figure awards when the arrest was aggressive or caused serious consequences (e.g., job loss, jail time, humiliation).

While this has an "all's well that ends well" warm and fuzzy feel to it, do you really want to go through the hassle of flipping off a cop (as comforting as that may feel, sometimes), THEN getting arrested, THEN filing a lawsuit in federal court, only to THEN hope you win, and THEN go after the individual cop who arrested you, only to discover that said cop doesn't have a pot to piss in?

That's a whole lot of ifs, and there is no guarantee that you'll win anything other than maybe the inner satisfaction that you were right all along. 

The bottom line here is that lawsuits against police for merely swearing or making rude gestures tend to be unsuccessful unless accompanied by some larger deprivation of rights.  While police departments risk financial payouts and judicial mandates, the potential payout is often not worth the hassle of litigation.  

So maybe just keep those phallic symbols gloved and out of sight.

Monday, February 16, 2026

How Do You Solve a Problem Like the FDA?

If there is one thing that really bugs me (these days), it's how big and worthless government is.  Take, for example, the FDA (aka the Food and Drug Administration).  

The FDA is a regulatory agency within the Department of Health and Human Services (HHS).  Historically, its job is/was to protect public health by ensuring that many products Americans use every day are safe, effective, and properly labeled.

Officially, the FDA regulates:

  • Prescription and over-the-counter drugs (human and veterinary).

  • Biologics (vaccines, blood products, gene therapies).

  • Medical devices (from pacemakers to tongue depressors).

  • Food safety (except most meat, poultry, and some egg products, which are regulated by USDA).

  • Cosmetics (ensuring products are safe and not mislabeled).

  • Tobacco products (authority granted in 2009).

  • Radiation-emitting products (like X-ray machines and microwaves).

The problem is that it's not doing a very good job of any of this.

When this is brought up, proponents of the FDA always point to the fact that they are underfunded and understaffed.  The problem is that no matter how much more money is budgeted or how many more people are hired, they always say that.  

In point of fact, for fiscal year 2025, the FDA's budget is/was a whopping $7.2 BILLION.  If the IRS can hire tens of thousands of employees, bringing its total workforce to over 100,000, then certainly the FDA can loosen the purse strings and get a few dozen more employees on its payroll.

I'm just sayin.

So, who actually runs the FDA?  Well...

Congress created the FDA’s legal authority through statutes (mainly the Food, Drug, and Cosmetic Act of 1938 and its amendments).

The President / Executive Branch appoints the FDA Commissioner (who leads the agency), with Senate confirmation.

Finally, Federal Courts can review FDA decisions if challenged (e.g., whether FDA exceeded its statutory authority, or if its actions were arbitrary or capricious under the Administrative Procedure Act).

So, in summary, the FDA is not independent; it’s an Executive Branch agency under HHS.  Congress writes the laws that give the FDA power, the President (through HHS) appoints its leadership and shapes its policy direction, and courts can step in if the FDA overreaches. 

Got all that? 

So, what's the problem with the FDA?  Mostly, the argument is that the FDA is too cozy with the industries it regulates and, consequently, fails to regulate food and drug safety.  While RFK, Jr. is working to change all this, it's still business as usual.

Um, so why is this important?

Well, all this coziness can lead to public health risks like unsafe drugs or contaminated food. In fact, the FDA's response to food safety crises is often too slow, and concerns exist about the use of unsafe agricultural chemicals, food additives, and processing techniques.  

So, wait, too slow?  How does this play out in real life?

Peanut Corporation of America Salmonella Outbreak (2008–2009)

  • What happened: Salmonella-contaminated peanut products caused over 700 illnesses and at least 9 deaths across 46 states.
  • Criticism: The FDA had been aware of previous contamination problems at the plant but failed to act aggressively before the outbreak spread. Critics said inspections were too infrequent and reactive rather than preventive.

Listeria in Cantaloupes (Jensen Farms, 2011)

  • What happened: A deadly listeria outbreak linked to cantaloupes sickened 147 people and killed 33 across 28 states.
  • Criticism: The FDA only ramped up produce-safety rules after this crisis, even though experts had long warned about weak oversight of fresh produce. Stronger standards didn’t come until the Food Safety Modernization Act (FSMA) was implemented later.

Spinach E. coli Outbreak (2006)

  • What happened: Bagged spinach contaminated with E. coli O157:H7 caused 199 illnesses and at least 3 deaths in 26 states.
  • Criticism: The FDA had known leafy greens were high-risk but hadn’t mandated stricter safety practices before the outbreak. Afterward, it issued only voluntary guidelines for leafy greens until FSMA gave it stronger authority years later.

Salmonella in Eggs (2010)

  • What happened: Half a billion eggs were recalled after Salmonella contamination at two Iowa farms, sickening more than 1,900 people.
  • Criticism: The FDA had finalized egg safety rules in 2009 but had not begun routine inspections before the outbreak occurred. The slow rollout left the public vulnerable.

Infant Formula Shortage & Cronobacter Contamination (Abbott, 2021–2022)

  • What happened: Abbott’s Michigan plant, a major U.S. producer of baby formula, was linked to Cronobacter contamination after several infants became ill and at least two died.
  • Criticism: A whistleblower had alerted the FDA months before the crisis, but the agency took four months to inspect and act on the complaint. By the time it shut down the plant, the U.S. faced a nationwide infant formula shortage.

2023–2025 Raw Milk Salmonella Outbreak Reporting Delay

  • What happened: From September 2023 to March 2024, at least 171 people across five states fell ill from a Salmonella outbreak linked to raw (unpasteurized) milk.
  • Criticism: The outbreak was not publicly reported until July 2025, long after illnesses occurred, raising concerns about delayed disclosure and its impact on consumer awareness and safety.

2024 Boar’s Head Listeriosis Outbreak (Listeria in Deli Meats)

  • What happened: A widespread listeria outbreak tied to Boar’s Head liverwurst and deli meats affected individuals between May and November 2024, resulting in 60 hospitalizations and 10 deaths. The implicated plant had documented 69 violations, including mold, insects, and unsanitary conditions.
  • Criticism: Experts and media questioned why regulatory authorities, including the FDA, allowed the plant to continue operations amid persistent safety failures.

2024 McDonald’s E. coli Outbreak (Contaminated Onions)

  • What happened: From September to October 2024, 104 confirmed E. coli O157:H7 cases (including one death) were linked to slivered onions on McDonald’s Quarter Pounders in 14 states.
  • Criticism: The FDA’s public warning only came on October 22, over a month into the outbreak, and traceback investigations took time to isolate the source at the supplier level, drawing criticism over delayed disclosure and slow root-cause identification.

Suspension of Food Emergency Response Network (FERN) Testing Program (2025)

  • What happened: In early 2025, the FDA temporarily suspended its FERN Proficiency Testing Program, a critical system ensuring lab readiness for detecting contaminants in dairy and other products. 
  • Criticism: While the agency said safety tests continued at state and federal levels, experts expressed concern that suspending this program, without a clear timeline for resumption, undermines confidence in emergency response readiness.

Delaying Enforcement of Food Traceability Rule (FSMA Section 204)

  • What happened: The Food Traceability Rule, part of the Food Safety Modernization Act, had been in development for 14 years and was set to take effect in January 2026. In March 2025, the FDA announced a 30-month delay, pushing enforcement back significantly.
  • Criticism: This postponement delays critical traceability infrastructure meant to speed outbreak investigations and recalls, creating concern that potentially dangerous food items will remain more difficult to track when contamination occurs.

While some of these are active outbreaks and others are regulatory shifts or resource challenges, collectively they underscore a disturbing pattern - the FDA struggles to detect, respond to, or prevent food (and other) safety threats.

So, how do you solve a problem like the FDA?  I don't know - I just report the stuff.  

Maybe, though, it's time to re-shuffle the deck and bring in some fresh talent.  Oh, wait, didn't they do that back in November 2024?  

It will certainly be interesting to see if the FDA can get its act together now under new management.

Monday, February 9, 2026

Expecting a Beat Down?

You know, most of the things I post are pretty non-personal (meaning they don't happen to me, so much).  Today's post hit closer to home.

The other day I got a call from a guy I knew in a prior life.  Seems Guy was walking down a street next to a park and got stopped by police.  Seems Guy was wearing black pants, a black shirt, black...well, suffice it to say, he was in a black kind of mood in the middle of summer - aaaaaand, while incredibly stylish, it caught the attention of the local po po.

Apparently, and I'm spitballing here, the officer that stopped Guy didn't like his style of clothing (which really didn't match the season).  When Guy was not forthcoming with personal information as fast as the officer liked, the officer arrested Guy and charged him with obstructing a police investigation, resisting arrest, assault, and a bunch of other stuff.

On a side note, I find it particularly funny that people get charged with resisting arrest.  I mean, who in blazes wants to be handcuffed and tossed in the back of a police car designed for people under 5 feet tall?  OK, I do know some people who like to be handcuffed, but I don't know anyone who would willingly be trussed up only to be tossed in the back of a patrol car.

It boggles my mind.

Anyway, fast forward a bit and Guy gets released, ALL charges are dropped, and he's now filing a lawsuit against the officer for violating his civil rights under 42 U.S.C. § 1983.

So, I got to thinking: what do people do to get targeted by police?  I mean, wouldn't you want to know, so you don't get stopped just because?

Turns out there are a number of factors that police are looking for, like:

1. Gang-Affiliated Colors or Symbols

  • Bright single-color outfits (e.g., all-red, all-blue, all-black in some cities)

  • Sports team gear linked to local gangs (e.g., LA Dodgers caps, Chicago Bulls jackets in certain neighborhoods)

  • Bandanas in specific colors tied to known gangs

  • Risk: In some regions, these colors are unofficial “flags” for gangs, and police may use them in gang injunction enforcement.

2. Bulky or Concealment-Heavy Clothing (Especially Off-Season)

  • Hoodies with the hood up on warm days

  • Puffy jackets in warm weather

  • Baggy cargo pants with oversized pockets

  • Risk: Can be interpreted as attempting to conceal weapons, drugs, or stolen items.

3. Face Coverings and Masks (Outside of Health Contexts)

  • Ski masks, balaclavas, or full face bandanas

  • Pulling a hoodie string tight over the face

  • Risk: May be treated as “masking” in preparation for theft or robbery.

4. Tactical, Military, or “Cop-Like” Gear

  • Tactical vests, camouflage pants, combat boots

  • Duty belts with empty holsters or MOLLE pouches

  • Risk: Can signal militia or armed group affiliation, which may prompt a stop.

5. “Suspicious” Layering

  • Wearing multiple shirts or jackets (common in shoplifting to conceal goods)

  • Heavy coats paired with shorts (temperature mismatch)

  • Risk: Seen as potentially hiding items or preparing for quick outfit changes.

6. Motorcycle Club Colors or Insignia

  • Leather vests with patches for known MCs (“1%” patches, skull insignias)

  • Large rocker patches identifying an MC and territory

  • Risk: Linked to outlaw biker groups under law enforcement surveillance.

7. Costumes or Disguises in Non-Holiday Contexts

  • Wigs, theatrical makeup, Halloween masks out of season

  • Risk: Interpreted as intent to conceal identity during a crime.

In summary:

  • Neutral colors & patterns — avoid solid bright red/blue in gang-heavy areas.

  • Dress season-appropriate — match clothing to the weather.

  • Avoid obvious gang/military insignia — unless you’re in a clearly legitimate setting.

  • Limit full face coverage — when not required for health or safety.

  • Blend with the environment — if others in the area are in casual wear, match the tone.

While they probably won't admit it, apart from clothing, there are several other factors police use to profile people.

Behavioral Profiles

  • Nervousness, avoiding eye contact, or suspicious movements (e.g., repeatedly looking around, hiding hands).

  • Loitering in unusual places or for long periods without apparent reason.

  • Trying to avoid police presence or walking away quickly.

  • Acting unusually at a gas station, like frequently changing vehicles or handling items suspiciously.

Appearance Profiles

  • Clothing associated with gangs or certain subcultures (e.g., colors, symbols).

  • Wearing baggy clothing or concealing items.

  • Unkempt appearance, which officers may associate with homelessness or drug use.

  • Age and gender stereotypes, e.g., young males are more frequently stopped.

Location-Based Profiles

  • Being in high-crime neighborhoods or “hot spots” known for drug activity or violence.

  • Presence at locations with a history of illegal activity, like certain gas stations or street corners.

  • Being in a vehicle that matches descriptions from recent crimes.

Vehicle Profiles

  • Vehicles reported stolen or involved in crimes.

  • Older models or cars with missing or altered license plates.

  • Vehicles frequently seen in high-crime areas.

  • Drivers exhibiting erratic driving behavior (speeding, swerving).

Known Associations

  • Individuals who have prior arrests or warrants.

  • Being with known suspects or associates.

  • Matching descriptions broadcasted via radio or alerts.

So, let's say you're wearing something that police don't like and you're about to be pulled over or otherwise harassed by the police.  What can you do to minimize the damage coming your way?

1. Stay Calm and Composed

  • Take deep breaths, keep your voice steady and polite.

  • Avoid shouting, arguing, or aggressive gestures.

2. Follow Lawful Instructions

  • Comply with clear, lawful commands (e.g., show ID, put your hands where they can see).

  • Ask calmly if you don’t understand an order instead of resisting.

3. Keep Your Hands Visible

  • Place hands on the steering wheel or in plain sight.

  • Don’t make sudden movements or reach into pockets without saying so.

4. Avoid Physical Resistance

  • Resisting arrest or struggling increases the chance of force.

  • If you disagree with the arrest, contest it later legally.

5. Use Your Words to De-Escalate

  • Say things like “I’m trying to cooperate” or “Please don’t hurt me.”

  • Avoid profanity or insults.

6. Record the Encounter if Safe

  • Use your phone or a dash cam to document.

  • Let officers know you are recording if it’s safe to do so.

7. Know Your Rights but Stay Safe

  • You have the right to remain silent and the right to an attorney.

  • Exercising your rights calmly is better than physical confrontation.

8. Seek Witnesses

  • If others are nearby, ask them to watch and record.

  • Witnesses can deter excessive force.

Bottom line, when confronted by police, don’t try to fight back physically during the incident because billy clubs hurt.  If you do get a beat down, make sure you get medical help ASAP and document everything.  Finally, report any abuse to internal affairs and consider civil/federal legal action.

Actually, you should probably consider litigation a foregone conclusion. 

I'm just sayin. 

Monday, February 2, 2026

Word of the Month for February 2026: AI

 

OK, OK, so "AI" is not, per se, a word so much as it is an acronym for "Artificial Intelligence."

Great and with that out of the way, what is AI (or artificial intelligence)? 

An overly complex definition of AI is: Artificial Intelligence (AI) is a multidisciplinary domain within computer science and cognitive science that involves the design, development, and analysis of computational systems capable of performing tasks traditionally requiring human cognitive processes such as perception, reasoning, learning, decision-making, and natural language understanding. It encompasses the creation of algorithms and models that enable machines to acquire representations of their environment, generalize from data, adapt to new information, and exhibit goal-directed behavior under varying conditions of uncertainty. AI draws on subfields including machine learning, knowledge representation, heuristic search, and robotics, leveraging statistical methods, neural architectures, and symbolic reasoning to enable autonomous or semi-autonomous systems to optimize actions in complex, dynamic environments while adhering to constraints defined by computational, ethical, and social considerations.

Got all that?

In more simplistic terms, AI is basically a fancy robot brain that tries to fake being smart so you don’t have to be.

That better?

Essentially, Artificial Intelligence is like building a mechanical apprentice that learns by watching, listening, and practicing, just as a human would, so it can help us carry out tasks.

For example:  Imagine teaching a child to sort laundry by colors: you show them examples, correct mistakes, and eventually, they learn to do it on their own.  AI works similarly, but instead of a child, it’s a computer system that learns from examples, patterns, and feedback so it can make decisions, recognize speech, translate languages, or drive a car.

It’s not truly “thinking” like a human, but it mimics parts of human learning and decision-making to help us do things faster, more consistently, and often on a much larger scale.
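
For the curious, the laundry-sorting analogy above can be sketched in a few lines of code.  This is my own toy illustration (not anything from a real AI product): a tiny "nearest neighbor" learner that, shown a handful of labeled color examples, sorts new colors it has never seen before.

```python
# Toy "learning from examples" sketch: sort laundry into "lights"
# and "darks" by finding the closest previously-labeled RGB color.

def distance(a, b):
    # squared Euclidean distance between two RGB colors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(color, examples):
    # pick the label of the closest known example (1-nearest-neighbor)
    best = min(examples, key=lambda ex: distance(color, ex[0]))
    return best[1]

# "Teaching" phase: show the system labeled examples.
examples = [
    ((250, 250, 245), "lights"),  # white shirt
    ((240, 230, 140), "lights"),  # khaki
    ((20, 20, 25), "darks"),      # black pants
    ((30, 30, 90), "darks"),      # navy blue
]

# "Doing it on its own": classify colors it was never shown.
print(classify((200, 200, 190), examples))  # a light gray -> "lights"
print(classify((40, 35, 50), examples))     # a dark purple -> "darks"
```

Real AI systems use vastly more data and math, but the idea is the same: generalize from examples and feedback rather than follow hand-written rules.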

Still unclear how it works in "real" life?  

Say you're looking to draft a professional resume for a sales professional (selling cars).  You only have a few key skills that might be useful, and you've worked at McDonald's slinging burgers for the last few years.

AI can crank out a really nice one-page resume, based on those parameters, for you.  Of course, I'd take the time to make small edits - but it will look sharp.

Maybe the resume you submitted above landed you an interview in front of 21 people.  While only 4 people asked you questions, you still need to send a thank-you letter to all 21 people.  I've done this and it took me 4 days to make each one a little different but relatable using notes I took during the interview(s).

AI can crank out those 21 unique and professional letters based on their titles alone, do it in under 2 minutes flat, and make you look like a superstar.

Maybe you're a lawyer and you need help with your lawyer stuff.  How might AI help you?

  • Streamlined Legal Research:  AI can quickly analyze vast amounts of legal data, identify relevant precedents, and suggest potential arguments, saving lawyers significant time and effort. 
  • Automated Contract Review:  AI tools can scan contracts for key clauses, potential risks, and inconsistencies, accelerating the review process and improving accuracy.
  • Enhanced eDiscovery:  AI can help manage and analyze large volumes of data during litigation, identifying relevant information more efficiently and reducing costs associated with discovery. 
  • Improved Risk Assessment:  AI-powered tools can analyze historical case data and predict potential outcomes, enabling lawyers to better advise clients and mitigate risks.
  • Drafting Legal Documents:  AI can assist in drafting initial versions of motions, briefs, contracts, and other legal documents, saving time and improving consistency.

Now, while all of that looks great, a HUGE drawback with using AI (particularly in law) is that AI tools, particularly generative AI, can sometimes produce inaccurate or fabricated information (hallucinations), requiring careful human review.
 
Wait, what?
 
Yeah.  There are a plethora of examples where lawyers used artificial intelligence search engines to find cases or even write whole briefs only to find out later that the cases cited therein don't exist.
 
For example:  

Say a lawyer uses an AI tool (like ChatGPT OR Microsoft Copilot OR Google Gemini OR Chatsonic OR Grok OR any AI legal assistant) to draft a legal document. The AI is asked:

“Provide cases supporting the argument that emotional distress damages are recoverable in breach of contract cases in Utah.”

The AI responds with:

“Yes, see Smith v. Jones, 456 P.3d 789 (Utah 2019), where the Utah Supreme Court held that emotional distress damages were recoverable in a breach of contract case.”

However:

  • Problem: Smith v. Jones does not exist.  Well, it might exist somewhere, but not with that citation, set of facts, or holding.  In this example, the AI generated a citation that sounds real but is entirely fabricated (“hallucinated”), including a made-up volume, page number, and holding.

  • The AI pulled patterns from similar cases but created a false case to fit the prompt.

  • If the attorney includes this citation in a filed brief, the attorney could (and probably should) face serious court sanctions, reputational damage, and ethical violations under ABA Model Rule 1.1 (Competence) and Rule 3.3 (Candor Toward the Tribunal).

Can you say oops?  

Two real-world examples of attorneys using hallucinated cases from AI engines include Mata v. Avianca, Inc. (S.D.N.Y., 2023) (aka the “ChatGPT Case”).  I know I've already blogged about this case in an earlier post but it's fun to talk about this stuff and these guys were really reckless.
 
In this case, attorneys Steven A. Schwartz and Peter LoDuca of Levidow, Levidow & Oberman used ChatGPT to draft a brief in a personal injury case against Avianca Airlines.
 
The brief included six non-existent cases generated by ChatGPT, such as: Varghese v. China Southern Airlines, Martinez v. Delta Airlines, and Miller v. United Airlines.

The attorneys asked ChatGPT if the cases were real, and ChatGPT falsely assured them they were, even providing fabricated excerpts.

Thing is, had they just Shepardized the cases, they would have discovered the discrepancies and avoided being sanctioned with a $5,000 fine and ordered to notify the real judges who were falsely cited in their brief.

In the second REAL case, Park v. Kim (2d Cir. 2024) (aka the "Second ChatGPT Sanction Case"), a lawyer in New York used ChatGPT to draft a brief in a personal injury case.

The brief included false citations to non-existent cases.  Opposing counsel flagged the citations as untraceable. The lawyer admitted to using ChatGPT without verifying the citations (i.e. he didn't Shepardize the cases).

In this second case, the court issued sanctions against the attorney, who was ordered to pay legal fees to opposing counsel and faced professional embarrassment (basically, he was laughed at at all future bar meetings).

Other cases where attorneys used A.I. to improperly draft legal documents (and were caught) include:

1) United States v. Hayes

  • Jurisdiction: U.S. District Court, Eastern District of California (2025)

  • What happened: A defense lawyer submitted a motion containing a fictitious case and quotation that appeared to be AI-generated. The court ordered the attorney to pay $1,500 and circulated the ruling to local bars and judges.

2) Butler Snow Attorneys (Disqualification Order)

  • Jurisdiction: U.S. District Court, Northern District of Alabama (2025)

  • What happened: Three attorneys from Butler Snow submitted filings with fabricated AI-generated citations in defending Alabama prison officials. The judge found the conduct improper, disqualified the lawyers from the case, and referred the matter to the Alabama State Bar.

3) Indiana Hallucination Citations (Ramirez)

  • Jurisdiction: U.S. District Court, Southern District of Indiana (2024-25)

  • What happened: In briefs for a case involving HoosierVac, an attorney filed multiple briefs with made-up AI-generated case citations. The magistrate judge recommended a $15,000 sanction and noted the lawyer failed to check the AI output.

4) Eastern District of Michigan — Sanctions for AI-Related Errors

  • Jurisdiction: U.S. District Court, Eastern District of Michigan (2025)

  • What happened: Plaintiffs’ counsel included in their responsive briefs real case names with fake quotes or misleading parentheticals that appeared to result from AI hallucinations. The court found Rule 11 violations and imposed monetary sanctions to deter future AI misuse.

5) Sanction (Southern District of Indiana — $6,000 Fine)

  • Jurisdiction: U.S. District Court, Southern District of Indiana (2025)

  • What happened: A federal judge fined an attorney $6,000 for filing briefs that included citations to nonexistent cases generated by an AI tool, emphasizing that such “hallucination cites” must be verified by counsel.  

6) In re Kheir (Bankr. S.D. Tex. 2025)

  • What happened: A bankruptcy court found plaintiff’s counsel used generative AI to “manufacture legal authority,” resulting in sanctions including fees, continuing legal education, and referral to disciplinary counsel.
 
7) In re Marla C. Martin — U.S. Bankruptcy Court, N.D. Ill. (2025)

  • What happened: A bankruptcy court found that counsel filed a brief containing fabricated case citations generated by AI (e.g., In re Montoya, In re Jager, etc.) in a Chapter 13 proceeding.

  • The attorney admitted he used ChatGPT for legal arguments and did not verify the generated citations.  The court held this violated Federal Rule of Bankruptcy Procedure 9011 and sanctioned the lawyer and firm with a $5,500 fine and required attendance at an AI education session.

8) Ford v. James Koutoulas & Lgbcoin, Ltd., No. 2:25‑cv‑23896‑BPY, 2025 U.S. Dist. LEXIS 234696 (M.D. Fla. Dec. 2, 2025)

  • What happened: In this federal case, the defendants’ summary judgment motion “contained several citations that the court and the plaintiffs suspected were GenAI hallucinations, where the court was unable to locate the cited authorities.”

In most of these cases, the attorneys faced stiff fines and humiliation at the hands of their peers and the public, and some were referred to the State Bar for discipline.

What is key to note is that prior to 2023, there were no recorded instances of attorneys being caught improperly using AI in this way.

Why?

Simply because the technology wasn't available until around 2023.  Prior to late 2022, there was no generative AI (like ChatGPT) capable of producing case citations.  

Earlier "AI" tools (like Westlaw's KeyCite or Lexis's Shepard's) were search and analysis tools (meaning humans searched and then analyzed what they got), not generative drafting tools, and neither KeyCite nor Shepard's produced hallucinated citations.

I suspect what happened is that law students and, subsequently, attorneys got lazy and stopped relying on their own efforts to draft legal documents, expecting that computers would continue to be reliable and not churn out non-existent citations.

Deceitful AI
Who knew people would program AI search engines to be deceitful?  (After all, algorithms are only as trustworthy as the people who programmed their parameters.)

The bottom line here is that, as great and helpful and fast as AI is, it is not 100% accurate.

Consequently, AI should never replace basic legal research practices (including cite checking using Shepard's or KeyCite) or remove the human element (i.e., personally editing your own work).