Monday, February 9, 2026

Expecting a Beat Down?

You know, most of the things I post are pretty non-personal (meaning they don't happen to me, so much).  Today's post hit closer to home.

The other day I got a call from a guy I knew in a prior life.  Seems Guy was walking down a street next to a park and got stopped by police.  Seems he was wearing black pants, a black shirt, black...well, suffice it to say, he was in a black kind of mood in the middle of summer - aaaaaand, while incredibly stylish, his outfit caught the attention of the local po po.

Apparently, and I'm spitballing here, the officer didn't like Guy's style of clothing (which really didn't match the season) and stopped him.  When Guy was not forthcoming with personal information as fast as Officer liked, Officer arrested Guy and charged him with obstructing a police investigation, resisting arrest, assault, and a bunch of other stuff.

On a side note, I find it particularly funny that people get charged with resisting arrest.  I mean, who in blazes wants to be handcuffed and tossed in the back of a police car designed for people under 5 feet tall?  OK, I do know some people who like to be handcuffed, but I don't know anyone who would willingly be trussed up only to be tossed in the back of a patrol car.

It boggles my mind.

Anyway, fast forward a bit and Guy gets released, ALL charges are dropped, and he's now filing a lawsuit against Officer for violating his civil rights under 42 USC § 1983.

So, I got to thinking: what do people do to get targeted by police?  I mean, wouldn't you want to know, so you don't get stopped just because?

Turns out there are a number of factors that police are looking for, like:

1. Gang-Affiliated Colors or Symbols

  • Bright single-color outfits (e.g., all-red, all-blue, all-black in some cities)

  • Sports team gear linked to local gangs (e.g., LA Dodgers caps, Chicago Bulls jackets in certain neighborhoods)

  • Bandanas in specific colors tied to known gangs

  • Risk: In some regions, these colors are unofficial “flags” for gangs, and police may use them in gang injunction enforcement.

2. Bulky or Concealment-Heavy Clothing (Especially Off-Season)

  • Hoodies with the hood up on warm days

  • Puffy jackets in warm weather

  • Baggy cargo pants with oversized pockets

  • Risk: Can be interpreted as attempting to conceal weapons, drugs, or stolen items.

3. Face Coverings and Masks (Outside of Health Contexts)

  • Ski masks, balaclavas, or full face bandanas

  • Pulling a hoodie string tight over the face

  • Risk: May be treated as “masking” in preparation for theft or robbery.

4. Tactical, Military, or “Cop-Like” Gear

  • Tactical vests, camouflage pants, combat boots

  • Duty belts with empty holsters or MOLLE pouches

  • Risk: Can signal militia or armed group affiliation, which may prompt a stop.

5. “Suspicious” Layering

  • Wearing multiple shirts or jackets (common in shoplifting to conceal goods)

  • Heavy coats paired with shorts (temperature mismatch)

  • Risk: Seen as potentially hiding items or preparing for quick outfit changes.

6. Motorcycle Club Colors or Insignia

  • Leather vests with patches for known MCs (“1%” patches, skull insignias)

  • Large rocker patches identifying an MC and territory

  • Risk: Linked to outlaw biker groups under law enforcement surveillance.

7. Costumes or Disguises in Non-Holiday Contexts

  • Wigs, theatrical makeup, Halloween masks out of season

  • Risk: Interpreted as intent to conceal identity during a crime.

In summary:

  • Neutral colors & patterns — avoid solid bright red/blue in gang-heavy areas.

  • Dress season-appropriate — match clothing to the weather.

  • Avoid obvious gang/military insignia — unless you’re in a clearly legitimate setting.

  • Limit full face coverage — when not required for health or safety.

  • Blend with the environment — if others in the area are in casual wear, match the tone.

While they probably won't admit it, apart from clothing, there are several other factors police use to profile people.

Behavioral Profiles

  • Nervousness, avoiding eye contact, or suspicious movements (e.g., repeatedly looking around, hiding hands).

  • Loitering in unusual places or for long periods without apparent reason.

  • Trying to avoid police presence or walking away quickly.

  • Acting unusually at a gas station, like frequently changing vehicles or handling items suspiciously.

Appearance Profiles

  • Clothing associated with gangs or certain subcultures (e.g., colors, symbols).

  • Wearing baggy clothing or concealing items.

  • Unkempt appearance, which officers may associate with homelessness or drug use.

  • Age and gender stereotypes, e.g., young males are more frequently stopped.

Location-Based Profiles

  • Being in high-crime neighborhoods or “hot spots” known for drug activity or violence.

  • Presence at locations with a history of illegal activity, like certain gas stations or street corners.

  • Being in a vehicle that matches descriptions from recent crimes.

Vehicle Profiles

  • Vehicles reported stolen or involved in crimes.

  • Older models or cars with missing or altered license plates.

  • Vehicles frequently seen in high-crime areas.

  • Drivers exhibiting erratic driving behavior (speeding, swerving).

Known Associations

  • Individuals who have prior arrests or warrants.

  • Being with known suspects or associates.

  • Matching descriptions broadcast via radio or alerts.

So, let's say you're wearing something that police don't like and you're about to be pulled over or otherwise harassed by the police.  What can you do to minimize the damage coming your way?

1. Stay Calm and Composed

  • Take deep breaths, keep your voice steady and polite.

  • Avoid shouting, arguing, or aggressive gestures.

2. Follow Lawful Instructions

  • Comply with clear, lawful commands (e.g., show ID, put your hands where they can see them).

  • Ask calmly if you don’t understand an order instead of resisting.

3. Keep Your Hands Visible

  • Place hands on the steering wheel or in plain sight.

  • Don’t make sudden movements or reach into pockets without saying so.

4. Avoid Physical Resistance

  • Resisting arrest or struggling increases the chance of force.

  • If you disagree with the arrest, contest it later legally.

5. Use Your Words to De-Escalate

  • Say things like “I’m trying to cooperate” or “Please don’t hurt me.”

  • Avoid profanity or insults.

6. Record the Encounter if Safe

  • Use your phone or a dash cam to document.

  • Let officers know you are recording if it’s safe to do so.

7. Know Your Rights but Stay Safe

  • You have the right to remain silent and the right to an attorney.

  • Exercising your rights calmly is better than physical confrontation.

8. Seek Witnesses

  • If others are nearby, ask them to watch and record.

  • Witnesses can deter excessive force.

Bottom line, when confronted by police, don't try to fight back physically during the incident because billy clubs hurt.  If you do get a beat down, make sure you get medical help ASAP and document everything.  Finally, report any abuse to internal affairs and consider civil/federal legal action.

Actually, you should probably consider litigation a foregone conclusion. 

I'm just sayin. 

Monday, February 2, 2026

Word of the Month for February 2026: AI

 

OK, OK, so "AI" is not, per se, a word so much as it is an acronym for "Artificial Intelligence."

Great, and with that out of the way, what is AI (or artificial intelligence)?

An overly complex definition of AI is: Artificial Intelligence (AI) is a multidisciplinary domain within computer science and cognitive science that involves the design, development, and analysis of computational systems capable of performing tasks traditionally requiring human cognitive processes such as perception, reasoning, learning, decision-making, and natural language understanding. It encompasses the creation of algorithms and models that enable machines to acquire representations of their environment, generalize from data, adapt to new information, and exhibit goal-directed behavior under varying conditions of uncertainty. AI draws on subfields including machine learning, knowledge representation, heuristic search, and robotics, leveraging statistical methods, neural architectures, and symbolic reasoning to enable autonomous or semi-autonomous systems to optimize actions in complex, dynamic environments while adhering to constraints defined by computational, ethical, and social considerations.

Got all that?

In more simplistic terms, AI is basically a fancy robot brain that tries to fake being smart so you don’t have to be.

That better?

Essentially, Artificial Intelligence is like building a mechanical apprentice that learns by watching, listening, and practicing, just as a human would, so it can help us carry out tasks.

For example:  Imagine teaching a child to sort laundry by colors: you show them examples, correct mistakes, and eventually, they learn to do it on their own.  AI works similarly, but instead of a child, it’s a computer system that learns from examples, patterns, and feedback so it can make decisions, recognize speech, translate languages, or drive a car.

It’s not truly “thinking” like a human, but it mimics parts of human learning and decision-making to help us do things faster, more consistently, and often on a much larger scale.
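
To make the laundry example concrete, here's a minimal Python sketch of "learning from examples" (a toy nearest-centroid classifier of my own invention, purely for illustration; real systems are far more elaborate, but the learn-from-labeled-examples loop is the same basic idea):

```python
# Toy "learning from examples": sort laundry into lights vs. darks from
# labeled examples, the way you'd teach a child. Each item is a brightness
# value from 0 (black) to 255 (white), labeled by the "parent".
examples = [(240, "lights"), (220, "lights"), (30, "darks"), (15, "darks"), (60, "darks")]

# "Learning": summarize each pile by its average brightness (its centroid).
centroids = {}
for label in {"lights", "darks"}:
    values = [v for v, lab in examples if lab == label]
    centroids[label] = sum(values) / len(values)

def sort_laundry(brightness: int) -> str:
    # "Inference": put a new, never-before-seen item in the nearest pile.
    return min(centroids, key=lambda label: abs(centroids[label] - brightness))

print(sort_laundry(200))  # lights
print(sort_laundry(45))   # darks
```

Feed it more (and more varied) examples and the piles get more accurate, which is, in miniature, what "training" an AI system means.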

Still unclear how it works in "real" life?  

Say you're looking to draft a professional resume for a sales position (selling cars), where you only have a few key skills that might be useful and you've worked at McDonald's slinging burgers for the last few years.

AI can crank out a really nice one-page resume, based on those parameters, for you.  Of course, I'd take the time to make small edits - but it will look sharp.

Maybe the resume you submitted above landed you an interview in front of 21 people.  While only 4 people asked you questions, you still need to send a thank-you letter to all 21 people.  I've done this and it took me 4 days to make each one a little different but relatable using notes I took during the interview(s).

AI can crank out those 21 unique, professional letters based on their titles alone, do it in under 2 minutes flat, and make you look like a superstar.
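
As a sketch of how that might look in practice, here's a short Python loop using the OpenAI SDK (my own illustration; the model name, prompt wording, and interviewer titles are all assumptions, and any other LLM API would do the same job):

```python
# Generate one tailored thank-you letter per interviewer title.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

titles = ["VP of Sales", "Sales Manager", "HR Generalist"]  # ...up to all 21

for title in titles:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap in whatever you use
        messages=[{
            "role": "user",
            "content": (
                f"Write a short, professional thank-you letter to the {title} "
                "who interviewed me today for a car-sales position. Vary the "
                "wording so each letter reads as unique."
            ),
        }],
    )
    print(response.choices[0].message.content, "\n---")
```

Two minutes, 21 letters; you still want to read each one before hitting send.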

Maybe you're a lawyer and you need help with your lawyer stuff.  How might AI help you?

  • Streamlined Legal Research:  AI can quickly analyze vast amounts of legal data, identify relevant precedents, and suggest potential arguments, saving lawyers significant time and effort. 
  • Automated Contract Review:  AI tools can scan contracts for key clauses, potential risks, and inconsistencies, accelerating the review process and improving accuracy.
  • Enhanced eDiscovery:  AI can help manage and analyze large volumes of data during litigation, identifying relevant information more efficiently and reducing costs associated with discovery. 
  • Improved Risk Assessment:  AI-powered tools can analyze historical case data and predict potential outcomes, enabling lawyers to better advise clients and mitigate risks.
  • Drafting Legal Documents:  AI can assist in drafting initial versions of motions, briefs, contracts, and other legal documents, saving time and improving consistency.
Now, while all of that looks great, a HUGE drawback with using AI (particularly in law) is that AI tools, particularly generative AI, can sometimes produce inaccurate or fabricated information (hallucinations), requiring careful human review.
 
Wait, what?
 
Yeah.  There are a plethora of examples where lawyers used artificial intelligence search engines to find cases or even write whole briefs only to find out later that the cases cited therein don't exist.
 
For example:  

Say a lawyer uses an AI tool (like ChatGPT OR Microsoft Copilot OR Google Gemini OR Chatsonic OR Grok OR any AI legal assistant) to draft a legal document. The AI is asked:

“Provide cases supporting the argument that emotional distress damages are recoverable in breach of contract cases in Utah.”

The AI responds with:

“Yes, see Smith v. Jones, 456 P.3d 789 (Utah 2019), where the Utah Supreme Court held that emotional distress damages were recoverable in a breach of contract case.”

However:

  • Problem: Smith v. Jones does not exist.  Well, it might exist somewhere, but not with that citation, those facts, or that holding - maybe not at all.  In this example, the AI generated a citation that sounds real but is entirely fabricated (“hallucinated”), including a made-up volume, page number, and holding.

  • The AI pulled patterns from similar cases but created a false case to fit the prompt.

  • If the attorney includes this citation in a filed brief, the attorney could (and probably should) face serious court sanctions, reputational damage, and ethical violations under ABA Model Rule 1.1 (Competence) and Rule 3.3 (Candor Toward the Tribunal).

Can you say oops?  
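
If you're wondering how a fake like Smith v. Jones slips past the eye: a hallucinated citation can be formally perfect.  Here's a minimal Python sketch (my own illustration, not from any of the cases discussed) showing that the fake sails through a Bluebook-style format check, which validates form and nothing else:

```python
import re

# A Bluebook-ish citation shape: "Party v. Party, vol Reporter page (Court Year)".
# Note this validates FORM only -- and plausible form is exactly what an LLM
# is trained to reproduce.
CITATION_RE = re.compile(
    r"^[A-Z][\w.'-]+ v\. [A-Z][\w.'-]+, \d+ [A-Za-z.\d]+ \d+ \([^)]* \d{4}\)$"
)

hallucinated = "Smith v. Jones, 456 P.3d 789 (Utah 2019)"  # the fake from above

# The fake passes: plausible parties, reporter, volume, page, court, year.
print(bool(CITATION_RE.match(hallucinated)))  # True

# And that's the whole problem: no amount of format checking can tell you
# whether the case actually exists. Only a trusted reporter or database can.
```

In other words, everything a reader can eyeball about a citation is exactly what the model is good at faking.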

Two real-world examples of attorneys using hallucinated cases from AI engines include Mata v. Avianca, Inc. (S.D.N.Y. 2023) (aka the “ChatGPT Case”).  I know I've already blogged about this case in an earlier post, but it's fun to talk about this stuff, and these guys were really reckless.
 
In this case, attorneys Steven A. Schwartz and Peter LoDuca of Levidow, Levidow & Oberman used ChatGPT to draft a brief in a personal injury case against Avianca Airlines.
 
The brief included six non-existent cases generated by ChatGPT, such as: Varghese v. China Southern Airlines, Martinez v. Delta Airlines, and Miller v. United Airlines.

The attorneys asked ChatGPT if the cases were real, and ChatGPT falsely assured them they were, even providing fabricated excerpts.

Thing is, had they just Shepardized the cases, they would have discovered the discrepancies and avoided the penalties: being sanctioned with a $5,000 fine and ordered to notify the real judges falsely cited in their brief.

In the second REAL case, Park v. Kim (2d Cir. 2024) (aka the "Second ChatGPT Sanction Case"), a lawyer in New York used ChatGPT to draft an opposition brief in a personal injury case.

The brief included false citations to non-existent cases.  Opposing counsel flagged the citations as untraceable. The lawyer admitted to using ChatGPT without verifying the citations (i.e. he didn't Shepardize the cases).

In this second case, the court issued sanctions against the attorney: the lawyer was ordered to pay legal fees to opposing counsel and faced professional embarrassment (basically, he was laughed at at all future bar meetings).

Other cases where attorneys used A.I. to improperly draft legal documents (and were caught) include:

1) United States v. Hayes

  • Jurisdiction: U.S. District Court, Eastern District of California (2025)

  • What happened: A defense lawyer submitted a motion containing a fictitious case and quotation that appeared to be AI-generated. The court ordered the attorney to pay $1,500 and circulated the ruling to local bars and judges.

2) Butler Snow Attorneys (Disqualification Order)

  • Jurisdiction: U.S. District Court, Northern District of Alabama (2025)

  • What happened: Three attorneys from Butler Snow submitted filings with fabricated AI-generated citations in defending Alabama prison officials. The judge found the conduct improper, disqualified the lawyers from the case, and referred the matter to the Alabama State Bar.

3) Indiana Hallucination Citations (Ramirez)

  • Jurisdiction: U.S. District Court, Southern District of Indiana (2024-25)

  • What happened: In briefs for a case involving HoosierVac, an attorney filed multiple briefs with made-up AI-generated case citations. The magistrate judge recommended a $15,000 sanction and noted the lawyer failed to check the AI output.

4) Eastern District of Michigan — Sanctions for AI-Related Errors

  • Jurisdiction: U.S. District Court, Eastern District of Michigan (2025)

  • What happened: Plaintiffs’ counsel included in their responsive briefs real case names with fake quotes or misleading parentheticals that appeared to result from AI hallucinations. The court found Rule 11 violations and imposed monetary sanctions to deter future AI misuse.

5) Sanction (Southern District of Indiana — $6,000 Fine)

  • Jurisdiction: U.S. District Court, Southern District of Indiana (2025)

  • What happened: A federal judge fined an attorney $6,000 for filing briefs that included citations to nonexistent cases generated by an AI tool, emphasizing that such “hallucination cites” must be verified by counsel.  

6) In re Kheir (Bankr. S.D. Tex. 2025)

  • What happened: A bankruptcy court found plaintiff’s counsel used generative AI to “manufacture legal authority,” resulting in sanctions including fees, continuing legal education, and referral to disciplinary counsel.
 
7) In re Marla C. Martin — U.S. Bankruptcy Court, N.D. Ill. (2025)

  • What happened: A bankruptcy court found that counsel filed a brief containing fabricated case citations generated by AI (e.g., In re Montoya, In re Jager, etc.) in a Chapter 13 proceeding.

  • The attorney admitted he used ChatGPT for legal arguments and did not verify the generated citations.  The court held this violated Federal Rule of Bankruptcy Procedure 9011 and sanctioned the lawyer and firm with a $5,500 fine and required attendance at an AI education session.

8) Ford v. James Koutoulas & Lgbcoin, Ltd., No. 2:25‑cv‑23896‑BPY, 2025 U.S. Dist. LEXIS 234696 (M.D. Fla. Dec. 2, 2025)

  • What happened: In this federal case, the defendants’ summary judgment motion “contained several citations that the court and the plaintiffs suspected were GenAI hallucinations, where the court was unable to locate the cited authorities.”

In most of these cases, the attorneys faced stiff fines, humiliation at the hands of their peers and the public, and some were referred to the State Bar for discipline.

What is key to note is that prior to 2023, there were no recorded instances where attorneys were caught improperly using AI.

Why?

Simply because the technology wasn't available until around 2023.  Prior to late 2022, there was no widely available generative AI (like ChatGPT) capable of producing case citations.

Earlier "AI" tools (like Westlaw's KeyCite or Lexis's Shepards) were search and analysis tools (meaning humans searched and analyzed when they got) - not generative drafting tools and neither KeyCite or Shepards produced hallucinated citations.

I suspect what happened is that law students and, subsequently, attorneys got lazy and stopped relying on their own efforts to draft legal documents, expecting that computers would continue to be reliable and not churn out non-existent citations.

Deceitful AI
Who knew people would program AI search engines to be deceitful?  (Algorithms are only as trustworthy as the people who programmed their parameters.)

The bottom line to all this is that, as great and helpful and fast as AI is, it is not 100% accurate.

Consequently, AI should never replace basic legal research practices (including cite checking using Shepard's or KeyCite) or remove the human element (i.e., personally editing your own work).

Monday, January 26, 2026

Oops, Wrong Door

You know what bothers me these days?  Well, at the moment, it's police who don't really understand or care about the law.  Particularly the 4th Amendment and/or 42 USC § 1983, and the rights we the people are owed under the law.

So, how this came up: I was scrolling through Facebook and saw one (of many) videos of a police officer ripping open a partly open window (like what a bear in a national park might do when it smelled - and wanted to get at - food).

 

The thing is, if you do a quick scroll through Internet land, you'll see, quite literally, hundreds (if not thousands) of links and videos of raucous police doing all sorts of things in violation of the United States Constitution (which they - allegedly - swore to protect).  For example:

Atlanta, Georgia — Martin Family Wrong-House Raid (2017)

  • Date: October 18, 2017

  • Incident: FBI agents and local SWAT mistakenly raided the home of the Martin family, believing it to be a drug suspect’s residence. The family was detained at gunpoint for hours.

  • Legal Action: The family sued the FBI and local police under §1983 for Fourth Amendment violations (illegal search and seizure). The case eventually reached the U.S. Supreme Court addressing sovereign immunity under the Federal Tort Claims Act (FTCA).

  • Outcome: The Supreme Court ruled unanimously in Martin v. United States (2025) that the family's FTCA claims could proceed, and the case was remanded; the civil suit continued in the lower courts, focusing on the wrongful search and damages claims.

Joliet, Illinois — Wrong Home Entry (2021)

  • Date: November 2021

  • Incident: Police executed a search warrant at the wrong address, detaining multiple family members (including children) for several hours while searching the home.

  • Legal Action: The family filed a federal civil rights lawsuit claiming violation of the Fourth Amendment’s protections against unreasonable searches and seizures.

  • Outcome: Pending or settled confidentially; the case raised awareness about the accuracy and verification processes for warrant execution.

Austin, Texas — Shields Family Wrong-House SWAT Raid (2023)

  • Date: August 2023

  • Incident: SWAT executed a no-knock raid on the Shields family home, which was the wrong address, using explosives and detaining occupants. Significant property damage and trauma ensued.

  • Legal Action: A federal civil rights lawsuit was filed alleging violations of Fourth Amendment rights, excessive force, and failure to verify the correct address.

  • Outcome: Lawsuit is ongoing, with motions to dismiss denied, allowing claims to proceed.

King County / Seattle, Washington — Gizachew Wondie False Warrant Case (2023)

  • Date: Incident circa 2018, settlement reached in 2023

  • Incident: Wondie was arrested based on a warrant supported by false statements from a detective. The warrant was invalid, leading to an unlawful arrest and detention.

  • Legal Action: Wondie filed a §1983 civil rights claim alleging false arrest and malicious prosecution based on fabricated evidence.

  • Outcome: King County agreed to settle for $225,000 to resolve the claim without admitting liability.

Berwyn Heights, Maryland — No-Knock Drug Raid on Mayor’s Home (2008)

  • Date: July 29, 2008

  • Incident: Police executed a no-knock warrant at the home of the mayor without sufficient verification, causing property damage and trauma.

  • Legal Action: The mayor filed civil claims alleging Fourth Amendment violations and sought damages for excessive force and destruction of property.

  • Outcome: Several officers were disciplined, and the city settled with the family for an undisclosed amount.

Other Notable Incidents:

March 13, 2020 — Louisville, Kentucky
No-knock raid that killed Breonna Taylor; civil claims and a $12M settlement for the family and federal criminal/civil proceedings followed.

May 25, 2020 — Minneapolis, Minnesota
Killing of George Floyd during an arrest; §1983 excessive-force claims, criminal prosecutions, and large civil settlement for the family. 

January 28, 2019 — Pecan Park (Houston), Texas
Botched no-knock/paramilitary raid (Harding Street raid) that resulted in two civilian deaths and widespread criticism of HPD tactics. 
 
April 2024 — Willard, North Carolina
Alleged mistaken SWAT raid on a mobile home — family filed federal suit claiming knock-and-announce/warrant problems and dangerous forced entry.
 
And these are just cases involving attacks on a home.  Can you imagine the number of incidents involving police pulling people over for alleged crimes on the roads?!
 
Probably a blog for another time.

Bottom line, and call me paranoid, but the best you can do is find a good lawyer before you have to find a good lawyer because, by the looks of things, it's not a matter of if police will try to bust down your door, but when.

Monday, January 19, 2026

It Might Be A Conspiracy Theory But We're Still Up 270 to Nil

You know, there are things that are going on around the world that I just don't care about.

Vladimir Putin? Some guy in Russia doing things.

Flooding in North Carolina?  Just a bunch of water with nowhere to go.

Deforestation in the Amazon rain forest? It's a "rain" forest.  Plant more trees.

Smog in India and China?  Send in the AQMD.

Gain of Function research relating to the coronavirus?  Turns out, I'm starting to care about this one if for no other reason than I really think those stupid white masks didn't do squat except make my face all hot and sweaty.

In case you were wondering, Gain-of-function (or "GoF") research refers to scientific studies in which an organism—often a virus, bacterium, or other pathogen—is deliberately modified to enhance certain biological properties. 

The “function” in question could be anything from infectivity and transmissibility to virulence (ability to cause disease) or resistance to treatments. Essentially, the research is aimed at giving the organism a new or stronger capability than it naturally possesses.

So, why is this so important?

Well, Gain-of-function research has become important—and controversial—because it sits at the intersection of science, public health, and global safety. Its significance comes from both its potential benefits and its risks.

I know I'm starting to sound like a 3-year-old here, but so what, and why is that important?

Well, GoF allows scientists to see how viruses or bacteria could naturally evolve. This is important because it helps in risk assessment, vaccine design, and pandemic preparedness. 

OK, alright - but helping to prepare for a pandemic is a good thing, right? 

Well, it would have been a good thing except for the fact that, in the case of the Covid virus, it was GoF research that actually caused the whole pandemic.  In fact, a recent report found that a lab-related incident involving gain-of-function research is most likely the origin of COVID-19.

Wait, what?! 

So, here's how things happened in real time: 

  • In 2002–2003, there was a SARS outbreak in China (SARS-CoV-1) with about 8,000 infected and about 800 deaths. 
  • Then, in 2012, there was a MERS outbreak in Saudi Arabia (MERS-CoV) with about 2,500 infected and about 860 deaths. 
  • Finally, in 2019, SARS-CoV-2 emerged in Wuhan, China, causing the COVID-19 pandemic. 

After SARS, researchers wanted to understand how coronaviruses could mutate to infect humans. GoF experiments were initiated and involved modifying animal coronaviruses to see if they could infect human cells or transmit between animals. 

And if you think Covid was transmitted by a bat, then you'd be right.  However, the prevailing conspiracy theory relating to the COVID-19 virus claims that it was either created or weaponized and then intentionally released into the general population, often suggesting involvement by government or military actors—most notably, the Chinese government or military.

This theory alleges that release was deliberate, with motives ranging from population control, global disruption, to biological warfare. 

You know, the thing with conspiracy theories is that when they are presented, there is no hard evidence to support them.  Well, no evidence except that almost all conspiracy theories in the last few years have been proven true, like:

  1. Mass Surveillance (NSA & Tech Companies)

    • Claim: Governments were secretly collecting vast amounts of private data from ordinary citizens.

    • Confirmed: Edward Snowden’s 2013–2014 revelations showed this was real, and reporting into the 2010s–2020s (e.g., PRISM, Section 702 of FISA) confirmed ongoing surveillance practices. Courts and Congress debated reforms through the 2010s and 2020s.

  2. Big Tech Content Manipulation & Censorship

    • Claim: Social media companies were coordinating with governments to suppress or flag certain content.

    • Confirmed: The “Twitter Files” (2022–2023) and related disclosures showed platforms coordinated with U.S. government agencies to influence moderation policies, particularly on COVID-19 and election-related content.

  3. Epstein & Elite Networks

    • Claim: Jeffrey Epstein was connected to powerful political, business, and academic figures and was involved in sex trafficking.

    • Confirmed: Epstein’s 2019 arrest and subsequent reporting showed years of abuse and connections to influential people. His death in custody fueled further speculation, but the trafficking ring itself was proven.

  4. Wuhan Lab Leak Theory (COVID-19 Origin)

    • Claim: COVID-19 may have originated from a laboratory accident in Wuhan, not solely from a wet market.

    • Initially dismissed: Branded as a conspiracy in 2020.

    • Later: By 2021–2023, U.S. intelligence agencies and multiple governments acknowledged the lab-leak hypothesis as a plausible scenario.

  5. Government Involvement in UFO/UAP Programs

    • Claim: The U.S. government was hiding evidence of unidentified aerial phenomena.

    • Confirmed: In 2020–2021, the Pentagon released declassified videos, and in 2022 it established the All-domain Anomaly Resolution Office (AARO). Congressional hearings (2022–2023) further confirmed military encounters with unexplained craft.

  6. Pharmaceutical Industry Influence

    • Claim: Big Pharma companies were influencing medical research and public messaging beyond normal lobbying.

    • Confirmed: Opioid litigation (e.g., Purdue Pharma, Johnson & Johnson) revealed deliberate downplaying of addiction risks, with internal documents proving knowledge of harm while aggressively marketing opioids.

  7. Cambridge Analytica & Facebook Data Harvesting (2016–2018)

    • Claim: A private firm was secretly harvesting data from millions of Facebook users to manipulate elections.

    • Confirmed: In 2018, whistleblowers and UK/US investigations revealed Cambridge Analytica harvested data from up to 87 million users without consent to build voter profiles. Facebook admitted failure to protect user data and paid billions in fines.

  8. Boeing 737 MAX Cover-Up (2018–2020)

    • Claim: Boeing and regulators downplayed safety problems with the 737 MAX after two crashes.

    • Confirmed: Investigations showed Boeing concealed critical information about the MCAS flight control system, and the FAA provided lax oversight. Boeing later agreed to a $2.5 billion settlement with the DOJ in 2021.

  9. Volkswagen Emissions Scandal (“Dieselgate”) (2015)

    • Claim: Automakers were cheating emissions tests to appear environmentally compliant.

    • Confirmed: The U.S. EPA revealed Volkswagen had installed “defeat devices” in millions of cars to trick emissions tests. Executives were charged, billions in fines were paid, and the company admitted to the scheme.

  10. Hunter Biden Laptop Story Suppression (2020–2022)

    • Claim: The laptop story was deliberately suppressed by social media platforms ahead of the 2020 election.

    • Confirmed: Twitter and Facebook limited sharing of the New York Post's October 2020 story, and major outlets later authenticated material from the laptop in 2022.

So, while it's a theory now, stick around and who knows what will come up in the next few years.

Anyway, here's where things get fun. 

Dr. Anthony Fauci, the former director of the National Institute of Allergy and Infectious Diseases (NIAID), has faced extensive scrutiny regarding his role in the COVID-19 pandemic, especially the origins of the virus and allegations of criminal negligence. 

In June 2024, Fauci testified before Congress and presented emails he sent early in the pandemic urging the reporting and investigation of valid concerns regarding a possible lab leak. He denied that these messages were an attempted cover-up, calling such allegations “preposterous.”

During his congressional testimony, Dr. Anthony Fauci made several statements about coronavirus research funding that were later challenged and characterized by some as misleading or untruthful.

Fauci repeatedly stated that the NIH and NIAID did not fund gain-of-function research at the Wuhan Institute of Virology. For example, in a 2021 Senate hearing, he said, “The NIH has not ever and does not now fund gain-of-function research in the Wuhan Institute of Virology.” 

However, documents later revealed by FOIA requests and confirmed by government officials contradicted this, showing that gain-of-function research was indeed funded at the Wuhan lab through grants approved by NIH/NIAID.

Fauci also testified that staff members working with him did not have conflicts of interest regarding the oversight of coronavirus research grants. 

Later, he acknowledged that at least one senior advisor, Dr. David Morens, had a conflict of interest and was implicated in obstructing congressional investigations into NIH grants related to Wuhan research.

During hearings, Fauci admitted that some COVID-19 mitigation policies, like the “six feet apart” social distancing rule, were not strictly based on science but rather were arbitrary decisions that “just appeared.” This admission raised questions about the scientific rigor behind some public health guidance he promoted.

In short, Fauci’s key false or misleading claims during congressional testimony centered on denying NIH/NIAID funding of gain-of-function research at Wuhan, denying conflicts of interest among his staff, and downplaying the lab leak theory’s credibility despite evidence to the contrary. 

OK, so what?  Unless someone is in an orange jumpsuit, why does this even matter?

It all matters because even if no one individual is criminally liable for the pandemic, understanding the circumstances and ethics surrounding gain-of-function research is crucial to mitigating risks to global health, improving scientific practices, ensuring accountability, and protecting society from potentially catastrophic lab accidents (like what happened with the Coronavirus).

Essentially, if we are unable to "trust the science" or the people behind the science, then when the next pandemic happens, is anyone going to listen?

Monday, January 5, 2026

Word of the Month for January 2026: Holographic Case Law

Given the speed at which tech moves these days, this might seem a little behind the times, but it's something that needs to be brought up again and again.

See, back in 2024 (yeah, a lifetime ago), I started using ChatGPT.

Then around early 2025, as I started cranking out a whole lot of legal research and blog content, I became acquainted with holographic cases, which quickly became synonymous with how ChatGPT and other LLMs (i.e., Large Language Models) treated attorneys and case law.

Before we get into the meat of this blog, let's define what I mean by Holographic case law and why it happens.  

HOLOGRAPHIC CASE LAW refers to fictitious legal cases generated by a large language model (LLM) that appear to be real, complete with

  • Case name

  • Citation / authority

  • Page numbers

  • Court jurisdiction

…but do not actually exist in any legal reporter or database.

So, picture it - you're researching a legal topic and you hop over to ChatGPT or Grok or Gemini or Claude or DeepSeek or Perplexity or any of the other few dozen LLMs in Internet-land, and you run a search on that legal topic.  In the course of your searching, you see references to several legal-esque cases.

Problem is, when you try to find those cases on the Internet or cite check them (like any 1st-year law student should have learned to do), you can't find them.

The reason you can't find the cases is because they don't exist.

Wait, doesn't exist?!  

How or why is that even possible?!?  Why might an LLM crank out a case that doesn't exist?

Funny thing, but I had that exact same question, and I posed it to ChatGPT, which responded with:

That’s a great question — and it’s one of the most important issues when using AI for legal research.

ChatGPT (and other LLMs) sometimes return cases that don’t exist because of the way large language models work:

  • Pattern Matching, Not Databases:
    I don’t have a built-in legal case database. Instead, I was trained on lots of legal writing and I generate responses by predicting what looks like a plausible answer. If you ask for a case citation, I try to produce one that fits the pattern of a real case — correct reporter, volume, page — even if it’s made up.

  • Pressure to Be Helpful:
    If you ask for “a case where X happened,” I try to fulfill that request. If I don’t have a known case in memory, I may invent one that sounds right rather than say “I don’t know,” especially if your prompt suggests that such a case must exist.

  • Legal Citations Are Predictable:
    Because case citations follow a rigid format (e.g., Smith v. Jones, 123 F.3d 456 (9th Cir. 2020)), I can generate something that looks very convincing — even to experienced lawyers. But if you check the reporter, the case might not exist.

  • Contextual Merging (Hallucination):
    Sometimes I blend real cases. For example, I might take facts from Case A and a holding from Case B and give it a citation that matches neither. This is not intentional — it’s a statistical artifact.

This problem is exactly what led to the high-profile Mata v. Avianca (S.D.N.Y. 2023) case — where lawyers used ChatGPT to write a brief, it cited six fake cases, and the court sanctioned them.

Best Practice When Using AI for Legal Research:

  • Treat all AI-supplied case names and citations as leads, not authorities.

  • Verify every case in a trusted source (Westlaw, Lexis, Fastcase, Casetext, etc.) before using it.

  • If you can’t find it in an official reporter or database, assume it doesn’t exist.

While it's nice that ChatGPT admits that it screwed up, what's problematic is that it does so so nonchalantly - as if it's just another day in the rain and it's no problem: so what, I screwed up - what are you gonna do about it?!?
 
And that's the rub, isn't it?  It's not like you can sue an LLM for returning holographic case law - or can you?  I mean, here you are against a deadline, you run a search in an LLM for a "Motion for Summary Judgment with 4 cases on point," and you submit the results to the court only to discover at your sanctions hearing that the 4 cases on point don't exist (because they're holographic).
 
Yeah, you could have cite checked the cases before you submitted your motion but you were against a deadline and what self-respecting attorney actually cite checks their work, right?!?
 
As it turns out, existing lawsuits against AI companies focus primarily on copyright infringement, where authors, artists, and news organizations allege their protected works were used without a license to train the LLMs. These cases are distinct from those arising from "hallucinated" case law in legal filings. 
 
Ultimately, the consensus in the legal community is that the onus remains entirely on the human attorney to verify AI-generated work before it is submitted to a court.
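
For what it's worth, that verification habit boils down to a short checklist, sketched here in Python (my own illustration; both citations are placeholders, and the trusted set is a hypothetical stand-in for a real lookup in Westlaw, Lexis, or an official reporter):

```python
# "Leads, not authorities": triage every AI-supplied citation before filing.
ai_citations = [
    "Doe v. Roe, 100 F.3d 200 (9th Cir. 1996)",  # placeholder, not a real cite
    "Smith v. Jones, 456 P.3d 789 (Utah 2019)",  # hallucinated-style placeholder
]

# Hypothetical stand-in for a trusted source (Westlaw, Lexis, an official
# reporter). In real life this is a database lookup, never another LLM.
TRUSTED = {"Doe v. Roe, 100 F.3d 200 (9th Cir. 1996)"}

for cite in ai_citations:
    if cite in TRUSTED:
        print("VERIFIED (ok to rely on):", cite)
    else:
        print("LEAD ONLY (do not file): ", cite)
```

Unglamorous, sure, but it beats explaining holographic case law to a judge.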
 
I mean, it's a novel idea (to sue an LLM instead of cite checking your work before filing with the court) given that, these days, no one wants to take responsibility for screwing up because it's always someone else's fault, right?
 
You know, as this is a new year and a time for new resolutions, maybe this might be one of your resolutions - to stop blaming ChatGPT for everything wrong with the world.
 
Yes?....No?....can I at least get an Amen?