Wait, what?! What's a deepfake?
A DEEPFAKE is media in which a person’s face, voice, or body is digitally altered to make it appear as though they did or said something they never did. For example:
- A video of a politician giving a speech they never gave.
- A celebrity’s face swapped into a movie scene.
- An audio clip mimicking a person’s voice to scam someone.
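A quick aside for the technically curious: what does "digitally altered" mean at the signal level? Early synthetic-voice pipelines often left statistical fingerprints, such as oddly band-limited high-frequency content. The Python sketch below is a toy illustration of that one idea only; real deepfake detectors are trained models, and this crude energy ratio would not catch a modern voice clone:

```python
import numpy as np

def high_band_energy_ratio(samples: np.ndarray, rate: int, cutoff_hz: float = 4000.0) -> float:
    """Fraction of spectral energy above cutoff_hz.

    A toy proxy for one old detection heuristic: some synthetic
    audio carried unnaturally little high-frequency energy.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    total = spectrum.sum()
    return float(spectrum[freqs >= cutoff_hz].sum() / total) if total else 0.0

if __name__ == "__main__":
    rate = 16_000
    t = np.linspace(0, 1, rate, endpoint=False)
    # Toy stand-ins: broadband noise for "natural" audio, a pair of
    # pure tones for band-limited "synthetic" audio.
    natural = np.random.default_rng(0).normal(size=rate)
    synthetic = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
    print("natural  :", round(high_band_energy_ratio(natural, rate), 3))   # ~0.5
    print("synthetic:", round(high_band_energy_ratio(synthetic, rate), 3)) # ~0.0
```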
So, back to the CEO. Apparently, the CEO sent a voicemail to all of his employees asking them to send him their personal information. It seemed sketchy, since the CEO hadn't made this type of request before. Turns out he didn't send it at all: the voicemail was outed as a deepfake before anyone lost any data.
Sounds pretty funky, huh? Think I'm making this all up (because who could possibly mimic another person's voice and mannerisms)? Well, it turns out these deepfakes happen a LOT. For example:
AI Voice Fraud — Executive Impersonation Leading to Wire Transfers
- UK Energy Firm (2019): Scammers used AI to clone the voice of a German parent company’s CEO, convincingly mimicking his accent and “melody,” and persuaded the UK-based CEO to wire €220,000 (approximately $243,000 USD) to a fake supplier.
Deepfake Video Conference — $25.6M Hong Kong Scam
- Hong Kong (2024): In a highly sophisticated scheme, employees participated in a video conference featuring deepfakes of their CFO and other colleagues. This led to HK$200 million (~$25.6 million USD) being transferred to fraudsters.
Deepfake Voice — $35M Bank Heist in the UAE
- United Arab Emirates (2020): A bank manager received what sounded like a phone call from a company director, with an AI-generated clone of the director’s voice instructing a $35 million transfer for a supposed acquisition. The fraudulent request was combined with emails from supposedly real associates, making the scam convincingly authentic.
Rising Trend of Executive Deepfake Scams
Several major companies have been targeted by voice or video deepfake scams aiming to extract sensitive information or payments. Details include:
- Ferrari (2024): A deepfake impersonated CEO Benedetto Vigna in a video call to authorize a fraudulent wire transfer. An executive assistant foiled the scam by asking a security question only the real CEO would know (the verification pattern sketched in code after this list).
- Arup (2024): Fraudsters impersonated the CFO in a video call and convinced a finance employee to transfer $25 million.
- WPP advertising group (2024): A deepfake of CEO Mark Read, using a voice clone and public photos, was used in a scam to solicit money and details from a senior executive. The attempt failed due to employee vigilance.
- LastPass (2024): An employee received an AI-generated audio call and WhatsApp messages impersonating CEO Karim Toubba. The employee became suspicious due to the "forced urgency" and unusual communication channel, and reported it.
- Crypto Exchange (2023): Binance warned about deepfake impersonation scams after its executives were targeted. In one case, a deepfake video of a CEO was used to steal credentials.
- UK Energy Company (2019): An employee wired $243,000 to a fraudulent account after being tricked by a deepfake audio clone of their CEO
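A common thread in the foiled attempts above (Ferrari, WPP, LastPass) is out-of-band verification: never act on a voice, video, or message alone; confirm through a separate, pre-registered channel. Here is a minimal sketch in Python of what such a gate might look like. Everything in it is hypothetical (the `PaymentRequest` fields, the channel names, the demo shared secret); it illustrates the pattern, not any company's actual control:

```python
import hashlib
import hmac
import secrets
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    requester: str      # who appears to be asking (e.g., "CEO")
    amount_usd: float
    channel: str        # channel the request arrived on

# Hypothetical policy knobs for this sketch.
IMPERSONABLE_CHANNELS = {"video_call", "voicemail", "whatsapp", "email"}
REVIEW_THRESHOLD_USD = 10_000

def requires_out_of_band_check(req: PaymentRequest) -> bool:
    """Flag any large request arriving on a channel a deepfake can fake."""
    return req.channel in IMPERSONABLE_CHANNELS and req.amount_usd >= REVIEW_THRESHOLD_USD

def make_challenge() -> tuple[str, str]:
    """Return a one-time challenge and its expected response.

    A stand-in for the Ferrari-style "question only the real CEO
    would know"; a real control would use a registered callback
    number or a hardware token, and the secret would live in a
    vault, never in source code.
    """
    nonce = secrets.token_hex(8)
    secret = b"demo-only-shared-secret"
    expected = hmac.new(secret, nonce.encode(), hashlib.sha256).hexdigest()
    return nonce, expected

if __name__ == "__main__":
    req = PaymentRequest(requester="CEO", amount_usd=250_000, channel="video_call")
    if requires_out_of_band_check(req):
        nonce, expected = make_challenge()
        print(f"HOLD transfer; verify via registered callback. Challenge: {nonce}")
        # The transfer proceeds only if the response matches `expected`.
```

The design point is simple: deepfakes attack the communication channel, so the control has to live on a different channel (or in a secret the impersonator cannot have).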
While the law is still catching up, major concerns involving deepfakes include privacy, defamation, fraud, and harassment. Ramifications vary by context:
- Defamation / Reputation Harm: If a deepfake falsely portrays someone in a damaging way, they may sue under defamation laws.
- Fraud & Identity Theft: Deepfakes used to impersonate someone (e.g., voice cloning for scams) may lead to wire fraud, identity theft, or securities fraud charges.
- Harassment / Nonconsensual Pornography: A large portion of harmful deepfakes involve placing individuals’ faces into explicit content without consent. Many states are passing laws criminalizing this.
- Election & Political Law: Some states (e.g., Texas, California) have statutes restricting deepfakes in election advertising or political campaigns.
- Intellectual Property: Using a celebrity’s likeness without permission may violate right of publicity laws.
- Federal & International Movement: In the U.S., there’s no single federal “deepfake law” yet, but bills have been proposed. The EU’s AI Act and China’s regulations require labeling or banning certain deepfakes.
So, who is creating these deepfakes, and why? Money aside, it depends on who you ask and on the creator's intent. Essentially, there are two groups: malicious actors and non-malicious creators.
- Individuals and groups: Malicious individuals can create deepfakes for purposes such as extortion, revenge, or harassment.
- Fraudsters and scammers: These criminals use deepfakes for financial fraud and phishing attacks. Recent high-profile cases have involved impersonating company executives on video calls to deceive employees into transferring large sums of money.
- State-sponsored groups and political actors: Foreign intelligence operatives and political parties use deepfakes for disinformation campaigns, election interference, and undermining public trust.
- Content creators and artists: Artists use deepfakes for creative expression, to create memes, or for satire and parody of public figures.
- Researchers and academics: These individuals develop and experiment with deepfake technology to advance AI and machine learning, and to create detection methods for malicious deepfakes.
- The entertainment industry: Filmmakers and visual effects artists use deepfakes for high-tech digital effects, such as de-aging actors or creating digital clones.
My next question would be (and is, since this is a legal-related blog) how have different jurisdictions handled (or have started to handle) these deepfakes?
Minnesota — civil & criminal deepfake protections
- Civil cause of action (nonconsensual sexual deepfakes): Minn. Stat. § 604.32 — “Cause of action for nonconsensual dissemination of a deep fake depicting intimate parts or sexual acts.” (Defines “deep fake,” creates private cause of action, remedies).
- Election-related criminal prohibition: Minn. Stat. § 609.771 (as amended by HF1370/2023) — criminalizes knowingly using deepfake technology to influence an election under specified timing/intent rules. (See HF1370 enacted language and SOS overview.).
California — private right and expanding statutes (nonconsensual digitized sexual material)
- Cal. Civ. Code § 1708.86 (existing right / cause of action for digitized sexually explicit material) — California already had statutory civil remedies for digitized/“deepfake” sexually explicit material; recent legislative action (AB 621 / AB 2839 and related bills) expanded and clarified definitions, added remedies, and created presumptions against deepfake-porn services. See the AB 621 committee analysis for text and changes.
- Criminal/other provisions: Recent California updates (and Penal Code cross-references) explicitly treat AI-generated intimate material in various contexts.
Texas — election deepfake statute
- Tex. Elec. Code Ann. § 255.004 (from SB 751, 2019) — one of the earliest state statutes addressing “deepfakes” in election communications (text focuses on video misrepresentations in campaigns). Texas has also passed other bills addressing AI-created intimate imagery.
Virginia — nonconsensual intimate image / deepfake pornography
- Va. Code § 18.2-386.2 et seq. — Virginia’s statute criminalizes creation/distribution of nonconsensual sexually explicit images, and has been applied to deepfakes (statute text and practitioner summaries describe penalties and elements).
New York — amendments to intimate image dissemination law
- NY legislation (e.g., S.1042/A. proposed amendments) — New York bills and amendments explicitly fold “digitized”/deepfake images into unlawful dissemination of intimate images; see NY Senate amendment language (S1042A) that inserts deepfake/digitization language into the statute. (Check final compiled bill text where enacted.)
OK, OK, so the statutes in place don't actually deal with people stealing personal data or squeezing someone for money; they target sexual exploitation and election interference instead. Both are important, but given the rate at which cybercriminals are expanding their operations, it's easy to see that governmental entities are lagging behind the times.
This is not to say that there has not been any action in the courts. In fact, a number of cases have dealt with deepfakes in recent years.
Lawsuits related to non-consensual deepfake pornography
- City of San Francisco vs. Deepfake Websites (2025): The City Attorney's office sued websites that generate nonconsensual explicit deepfakes, resulting in a settlement with one company, Briver LLC, for $100,000 and a permanent injunction. The city is continuing litigation against the remaining defendants, some of which are located internationally.
- Kyland Young vs. NeoCortex, Inc. (2023): The reality TV star sued the developer of the deepfake software Reface, alleging the app violated his right of publicity under California law. This case highlights how deepfake apps can be misused.
Cases concerning intellectual property and likeness
- George Carlin Estate vs. Dudesy Podcast (2024): The estate for the late comedian sued the Dudesy podcast for using AI to create a deepfake comedy special titled George Carlin: I'm Glad I'm Dead. The lawsuit was settled quickly, but it brought attention to using AI to replicate an artist's likeness and voice.
- Disney & Universal vs. Midjourney (2025): Major studios filed a lawsuit against the AI image generator Midjourney for the "wholesale appropriation" of their characters, such as Darth Vader and Minions, to train its AI. The suit alleges copyright infringement and dilution of their intellectual property.
- Clearview AI Illinois Biometric Privacy Class Action (2025): The facial recognition startup Clearview AI agreed to a settlement valued at roughly $50 million in a class-action lawsuit for scraping billions of facial images from the internet without user consent. The suit was brought under the Illinois Biometric Information Privacy Act (BIPA).
Lawsuits involving election interference and misinformation
- New Hampshire Robocall Case (2024): A political consultant was charged with orchestrating a deepfake robocall campaign that used an AI-generated voice mimicking President Biden to deter Democratic voters from casting ballots.
- X (formerly Twitter) vs. California (2025): Elon Musk's social media company X challenged and won a legal victory against a California law restricting election-related deepfakes. A federal judge blocked the law, citing concerns that it could lead to censorship of protected political speech, such as parody.
Other ongoing deepfake-related litigation
- Tesla Wrongful Death Lawsuit (2023): As part of a wrongful death lawsuit against Tesla, the company's attorneys questioned the authenticity of a video showing CEO Elon Musk making statements about Tesla's self-driving safety. Musk was ordered to testify under oath to determine the video's authenticity, highlighting how deepfake claims can affect the admissibility of evidence in court.
- Mark Walters vs. OpenAI (2025): A radio host sued OpenAI for defamation after ChatGPT generated a false summary that accused him of embezzlement. The court granted summary judgment in favor of OpenAI, ruling that ChatGPT's output was not a factual assertion given the known fallibility of the technology.
And the list goes on and on and... The point of all this is that while AI is helpful, it can also be a pain in the neck: if you can't believe your own eyes, what can you believe?
I guess the bottom line to all this is stay informed, be aware of your surroundings, and know that everyone is out to get you.
That's not paranoia, that's just gut-reaction common sense.