Deepfake technology creates synthetic images, videos, and audio that mimic real people with near-perfect accuracy. What started as novelty content now powers scams, impersonation, political interference, and nonconsensual pornography. The threat is no longer hypothetical. The law is working to catch up.
While there is still no broad federal ban on all deepfakes, Congress passed its first targeted statute in 2025: the TAKE IT DOWN Act, which criminalizes the distribution of nonconsensual intimate deepfakes and requires platforms to remove them quickly. States continue to move faster in other areas, including political manipulation, impersonation, and consumer deception.
This article breaks down the current legal framework, federal tools, state statutes, and the practical boundaries shaping what’s enforceable today.
Federal Deepfake Legislation and Enforcement Tools
There is no single federal law that bans all deepfakes across all use cases. But existing statutes, now including the TAKE IT DOWN Act, give agencies multiple enforcement paths when synthetic content causes harm through fraud, impersonation, harassment, or commercial deception.
Congress has focused mostly on research and national security. Federal regulators, however, are already applying general statutes to deepfake scenarios, using legal authority originally designed for fraud, consumer protection, and cybercrime.
Here’s how federal enforcement is unfolding now, and where the government is positioning itself next.
National Defense Authorization Act (NDAA)
The NDAA is not a criminal statute; it’s a national security directive. But it plays a foundational role in how federal agencies treat deepfakes.
Recent NDAA provisions require the Department of Defense and related agencies to evaluate how synthetic media affects national security, military operations, and foreign influence campaigns. These provisions were introduced after multiple intelligence reports highlighted deepfakes as a destabilization threat, particularly during elections and in disinformation warfare.
This is not window dressing. The intelligence community now includes deepfakes alongside cyberattacks, foreign propaganda, and other digital threats. In effect, the NDAA pushes deepfakes into national security strategy, setting the stage for future laws and giving agencies formal authority to monitor and report.
While it doesn’t carry penalties on its own, the NDAA ensures synthetic media is no longer a back-burner issue in defense circles. That prioritization influences budget allocation, cross-agency collaboration, and future enforcement infrastructure.
Federal Trade Commission (FTC) Act
The FTC doesn’t need new rules to prosecute deepfake deception. Under Section 5 of the FTC Act, the agency already has broad authority to crack down on unfair or deceptive business practices. That includes synthetic content.
When a business uses a deepfake to mislead consumers by impersonating a spokesperson, faking endorsements, or modifying product visuals, the FTC can take action. The key legal test is whether the content could mislead a reasonable consumer and whether the business gains financially from the deception.
Startups that rely on generative AI to create ads, testimonials, or influencer content are under particular scrutiny. If synthetic media is used without disclosure, or if it creates confusion about the origin of a product or service, the FTC has grounds to act.
The agency has already issued policy guidance warning companies not to misrepresent synthetic content or obscure its AI origins. Companies using deepfakes in customer-facing content should treat that guidance as a roadmap for Section 5 enforcement.
FTC enforcement carries financial penalties, corrective advertising orders, and consent decrees. The cost of noncompliance is not theoretical; it hits revenue and restricts future campaigns.
U.S. Criminal Code and Related Statutes
There is no standalone deepfake crime in federal law. But federal prosecutors are increasingly applying existing criminal statutes to synthetic media abuse. The framework is already in place. It’s being adapted.
Charges may include:
- Wire fraud: If a deepfake is used to defraud individuals or businesses, such as impersonating a CEO in a phishing video, the underlying conduct supports federal wire fraud charges.
- Identity theft: When synthetic media is used to impersonate a real person, particularly to obtain value or mislead third parties, identity theft charges are on the table.
- Cyberstalking and harassment: Deepfakes used to intimidate, harass, or target individuals, especially in the context of nonconsensual sexual imagery or revenge porn, can trigger criminal harassment or cyberstalking charges.
- Extortion: If someone uses a deepfake to coerce another person into giving up money, access, or personal information, extortion laws apply.
These cases turn on intent. Prosecutors look for evidence that the person distributing the deepfake knew it was false, intended to cause harm, and took active steps to conceal or weaponize the content. Where that evidence exists, enforcement can move quickly.
The real challenge is volume. Enforcement resources are limited. But the legal tools are already in use, and businesses or individuals caught misusing deepfakes may find themselves facing federal charges, even if the content was created for “entertainment” or “testing” purposes.
TAKE IT DOWN Act (2025)
In 2025, Congress passed the TAKE IT DOWN Act, America’s first federal law directly regulating deepfake abuse. The law targets nonconsensual intimate content, including synthetic images or videos that depict real individuals in sexual acts without their consent.
Platforms must remove the content within 48 hours of a valid notification, or face penalties. Victims do not need to prove reputational damage or financial loss; the unauthorized publication itself is enough. This includes content created entirely by AI if it falsely depicts an identifiable real person.
This law closes a key gap. It imposes both criminal and civil consequences and forces online platforms to act. While limited in scope, it marks a shift: the federal government is no longer on the sidelines.
State-Level Deepfake Legislation Across the U.S.
Federal law may lag, but state legislatures are moving quickly. As deepfake technology outpaces public understanding, lawmakers are filling the gaps, especially where the harm is personal, reputational, or political.
Most state laws do not ban deepfakes outright. Instead, they target specific use cases with clear public consequences: manipulated political content, nonconsensual sexual imagery, and identity theft. Enforcement tools range from civil remedies to criminal charges, with penalties escalating based on impact and intent.
The result remains a fragmented but increasingly comprehensive legal landscape. By 2025, nearly every state had enacted at least one deepfake-related statute targeting specific use cases like nonconsensual pornography, political manipulation, or digital impersonation. The laws vary in scope and enforceability, but the era of legal silence is over.
California Deepfake Laws
California led the country with two focused statutes:
- Civil Code § 1708.86 gives individuals a private right of action against anyone who creates or shares sexually explicit deepfake content depicting them without consent. The law does not require proof that the subject suffered financial loss. The violation itself is enough.
- Elections Code § 20010 prohibits the distribution of deepfakes that falsely portray candidates in political ads within 60 days of an election. The law applies whether the manipulation is video, audio, or both, and whether it was created in-state or not.
Both laws are narrowly written and built for targeted enforcement. They do not restrict all synthetic media. They zero in on use cases where the risk of public deception or personal harm is highest.
California’s approach reflects a policy decision: not to outlaw the technology, but to regulate its misuse where damage is provable and immediate.
New York Deepfake Laws
New York’s deepfake statute focuses on digital likeness protection. It expands existing right of publicity laws to cover synthetic media, including nonconsensual pornography and commercial uses of an individual’s likeness.
What sets New York apart is its posthumous protection. The law allows estates to sue on behalf of deceased individuals, which is critical for celebrities, influencers, and public figures whose NIL (name, image, likeness) rights continue to generate revenue after death.
New York’s statute is designed for commercial enforcement. It gives rightsholders and brands a way to stop unauthorized uses of digital replicas, especially in advertising or media. As deepfakes become tools for synthetic endorsement or fake branding, this framework will become a model for NIL-driven litigation.
Texas Deepfake Laws
Texas takes a hard line on political deepfakes. Under Election Code § 255.004, it is a Class A misdemeanor to create or distribute a deepfake video intended to injure a candidate or influence the outcome of an election within 30 days of that election.
This law is tightly drafted but built for speed. It allows for fast investigation and criminal penalties if the synthetic content misleads voters or falsely represents a candidate. The scope is narrowly focused on election periods, but the enforcement posture is clear.
Texas treats deepfake interference as electoral misconduct rather than protected speech. While courts may still scrutinize these laws under the First Amendment, the state’s enforcement posture is clear.
Other State Deepfake Legislation
Several other states are following similar paths:
- Virginia criminalized the distribution of deepfake pornography without consent, joining California and New York in targeting sexual exploitation.
- Maryland added penalties for digital impersonation that causes reputational harm or financial loss, even if the content does not involve nudity.
- Illinois expanded its biometric privacy and right of publicity laws to cover AI-generated media, reinforcing its already aggressive stance on digital privacy and facial recognition.
New bills are being introduced across multiple states, focused on:
- Deepfakes in political advertising
- Unauthorized commercial use of voice or image
- AI-generated impersonation and digital fraud
Expect acceleration. As election cycles heat up and AI tools become easier to access, legislatures will move quickly to pass headline-ready laws with rapid enforcement options.
Where Deepfake Legislation Applies Most Directly
While deepfake technology raises broad ethical questions, the law only steps in when specific harms surface. Right now, enforcement centers on three use cases: political interference, sexual exploitation, and fraud. These categories define where the risk is real and where liability is already landing.
Political Advertising and Election Interference
States like California and Texas have passed laws that directly prohibit deceptive deepfake content used in political campaigns. These laws are narrow but forceful. They prohibit synthetic media that misrepresents a candidate’s speech or actions in the run-up to an election.
Some laws require disclosure. Others flatly ban the distribution of misleading AI-generated content within a defined pre-election window. The goal is to prevent voter manipulation. For companies or individuals distributing this content, knowingly or not, the risk is rapid enforcement, reputational fallout, and, in some cases, criminal charges.
Political deepfakes are now considered election interference. That means enforcement will move fast.
Nonconsensual Pornography and Digital Sexual Abuse
This is where deepfake laws are most aggressive. Many states impose both criminal and civil penalties for creating or distributing sexually explicit content involving someone’s likeness without consent.
Consent is not implied. The law doesn’t care whether the image was publicly available or whether the person is a public figure. If the synthetic content creates the appearance of real sexual conduct and the person didn’t authorize it, enforcement is available.
Victims can seek restraining orders, monetary damages, and criminal prosecution. This applies equally to creators, distributors, and platforms that fail to act after notice.
Deepfake-Driven Fraud and Defamation
When deepfakes are used to impersonate executives, forge business deals, or spread false claims about public figures or brands, the legal exposure shifts to civil litigation.
Fraud statutes apply when synthetic media is used to extract value, whether through fake pitches, phishing scams, or doctored videos that mislead partners or customers. Defamation law applies when deepfakes damage reputation by attributing false actions or statements to real people.
Courts are expanding how traditional laws apply to synthetic content. The test isn’t whether the content is real. It’s whether the harm is.
With the TAKE IT DOWN Act in force and state statutes multiplying, creators and distributors of synthetic media face growing legal exposure. The standard is shifting from “can I do this?” to “can I defend it?”
How to Mitigate Legal Risks Under Deepfake Legislation
If your business uses synthetic media or could be impersonated by it, you need a defense strategy built in.
- Label synthetic content clearly. Disclosure may be legally required, especially in political, advertising, or commercial settings. Even where not required, disclosure limits liability by signaling transparency.
- Use contracts to block misuse. When licensing likeness rights, voice data, or content inputs, your agreements should prohibit synthetic use unless specifically authorized. This protects brand equity and limits litigation risk if something goes sideways.
- Monitor your identity and brand. Deepfake impersonation doesn’t need your consent to go live. Track use across social platforms, ad networks, and AI tools. If your image, name, or voice is being misused, act immediately.
- Escalate fast when abuse appears. Takedown notices, platform complaints, and cease-and-desist letters all work, but only if they go out before the content spreads. When misuse surfaces, delay costs control.
Navigate Synthetic Media Law Before It Becomes a Crisis
Deepfake legislation is still evolving, but the risks are here now. Whether you’re creating AI-generated content, licensing likeness rights, or building a brand vulnerable to impersonation, the legal posture matters. The companies that survive regulatory scrutiny will be the ones that structured early and escalated quickly when needed.
Traverse Legal helps clients lock down risk before synthetic content turns into a legal or reputational liability. That includes reviewing contracts for AI use, responding to unauthorized impersonation, and helping public figures, brands, and platforms enforce their rights.
Federal law has entered the field, starting with the TAKE IT DOWN Act. State law is no longer the ambient context. It’s a frontline constraint. Anyone creating, distributing, or reacting to synthetic media must plan accordingly.
You don’t need to wait for legislation to catch up. You need a strategy that holds up now.