Deepfake technology — AI-generated video that maps one person's face onto another's body — has created a category of harm that didn't exist five years ago. Victims discover their face has been placed in pornographic videos they never appeared in, used in executive impersonation fraud that cost their company hundreds of thousands of dollars, or deployed in fake "emergency" videos designed to extort money from family members.
The National Center for Missing and Exploited Children reported a dramatic increase in deepfake sextortion cases involving minors. The FBI received over 7,000 reports of financially motivated sextortion (including deepfake-based extortion) in a single year. If this has happened to you, this guide provides the exact steps to take.
Types of Deepfake Scams You May Be Facing
Deepfake Sextortion
A scammer takes photos from your social media or other public sources, uses AI deepfake tools to place your face on pornographic content, then contacts you threatening to share the fabricated video with your contacts, employer, or family unless you pay. This is the most common individual-targeting deepfake scam. The content is fabricated — but the threat is real enough to cause serious psychological harm regardless.
CEO / Executive Fraud Deepfakes
Criminals create deepfake video of corporate executives — real people — to impersonate them in video calls with finance or accounting staff. The "executive" authorizes urgent wire transfers. In 2024, a finance worker at a Hong Kong company transferred $25 million USD after a deepfake video call where everyone on the call appeared to be company executives. This type of fraud is increasingly targeting mid-sized businesses where verification protocols are less rigorous.
Family Emergency Deepfakes
Scammers create short deepfake or AI-voice clips of family members in apparent distress, then call grandparents or parents claiming the family member is in jail, in an accident, or in danger. This is an evolution of the "grandparent scam" — now with a convincing AI-generated voice or short video clip to make the deception more credible.
Identity Fraud Using Your Face
Some deepfake operations use your face to create fake ID documents, to pass KYC (know your customer) verification at financial institutions, or to create convincing fake social media profiles for romance or investment scams targeting others.
Immediate Steps — Do These First
Preserve Evidence Before Anything Else
- Do not pay any demand. Payment confirms your identity, proves you'll pay, and invites escalating demands. Paying does not guarantee content removal — it guarantees you become a known target.
- Screenshot everything immediately. Capture the extortion message, the URL where the content is hosted, any wallet addresses or payment instructions, and any contact information used by the scammer.
- Record the content URL without clicking — copy the URL directly from the message. Do not click links that might expose your IP or install malware.
- Note all identifiers: username, email address, phone number, social media profiles, any platform names.
- Do not delete your own messages. Even if you responded, your communications are part of the evidence trail law enforcement needs.
- Back up all evidence immediately to cloud storage (Google Drive, iCloud, Dropbox). If your device is seized, compromised, or fails, you still have your evidence.
Reporting to the FBI, FTC, and NCMEC
FBI Internet Crime Complaint Center
File at ic3.gov (general internet crimes) or submit an online tip at tips.fbi.gov. The FBI has specific units handling sextortion and deepfake extortion cases. Include every piece of evidence: scammer contact info, content URLs, wallet addresses, and all communications.
For sextortion cases, the FBI's guidance is explicit: do not pay. Law enforcement has seen thousands of these cases and has strategies for dealing with them. Filing a report also protects you legally — it documents that you were victimized, which matters if content ends up being associated with your name in a background check or other context.
Federal Trade Commission
File at reportfraud.ftc.gov. Select "Impersonation scam" as the category. The FTC compiles data that drives regulatory action and consumer alerts.
NCMEC — If a Minor Is Involved
If the deepfake victim is under 18, or if you're an adult being threatened with content that was created from photos taken when you were a minor, contact the National Center for Missing and Exploited Children immediately:
- Hotline: 1-800-THE-LOST (1-800-843-5678)
- CyberTipline: missingkids.org/gethelpnow/cybertipline
NCMEC has direct relationships with platforms and law enforcement and can trigger responses that individual reports cannot. This is especially critical for any content involving minors.
Platform Takedown Requests
Getting deepfake content removed from platforms requires a multi-track approach:
StopNCII.org — Your First Stop
StopNCII.org (Stop Non-Consensual Intimate Images) is a free service that creates a "hash" — a digital fingerprint — of images and videos. This hash is shared with partner platforms (including Facebook, Instagram, TikTok, Snapchat, and others) so they can automatically detect and remove the content without you having to repeatedly report it. You can submit content you control (your own photos) to generate preventive hashes. This is the highest-leverage single action for non-consensual intimate images.
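StopNCII computes hashes on your own device (the images themselves never leave it), using its own matching technology. The fingerprint idea behind it can be illustrated with a simple cryptographic hash in Python — a sketch of the concept only, not StopNCII's actual algorithm:

```python
import hashlib

def fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of a file: a fixed-length 'fingerprint'
    that identifies the exact bytes without revealing the content."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files don't need to fit in memory
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Real NCII-matching systems use perceptual hashes, which also catch resized or lightly edited copies; a cryptographic hash like SHA-256 only matches byte-identical files. Either way, the platform receives only the fingerprint, never the image.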
DMCA Takedown Notices
You hold copyright in photos you've taken of yourself, and a photographer can authorize you to enforce theirs; your likeness itself is protected by right-of-publicity law rather than copyright, so base the notice on the source photos. A DMCA (Digital Millennium Copyright Act) takedown notice filed with a platform requires them to remove the content or face liability. Most platforms have DMCA submission forms in their help centers. Key elements of a DMCA notice:
- Your contact information (name, address, email, phone)
- Description of the copyrighted work (the original photos used as source material)
- URL of the infringing content
- Statement that you have a good faith belief the use is unauthorized
- Statement that the information is accurate, under penalty of perjury
- Your signature
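As an illustration of how those elements fit together (not legal advice — the wording below is a hypothetical sketch, not a court-tested form), a notice can be assembled from a template:

```python
# Hypothetical DMCA notice template; placeholder wording for illustration only
DMCA_TEMPLATE = """\
To whom it may concern,

I am the copyright owner of the work described below and request removal
of infringing material under 17 U.S.C. 512(c).

Copyrighted work: {work}
Infringing URL(s): {urls}

I have a good faith belief that the use described above is not authorized
by the copyright owner, its agent, or the law.

The information in this notice is accurate, and under penalty of perjury,
I am the copyright owner or authorized to act on the owner's behalf.

Name: {name}
Address: {address}
Email: {email}
Phone: {phone}

Signature: /{name}/
"""

def build_dmca_notice(name, address, email, phone, work, urls):
    """Fill the template with the victim's details and the infringing URLs."""
    return DMCA_TEMPLATE.format(
        name=name, address=address, email=email,
        phone=phone, work=work, urls=", ".join(urls))
```

Paste the result into the platform's DMCA form or send it to their designated agent; an attorney can confirm the exact statutory language for your situation.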
Google Content Removal
Google has a specific removal request tool for non-consensual explicit imagery. Once the content is removed from the source platform, use Google's removal tool to de-index any search results pointing to cached or mirror copies.
Platform-Specific Reporting
All major platforms have policies against non-consensual intimate imagery (NCII). File reports with each platform where content appears:
- Meta (Facebook/Instagram): Use the in-app reporting, then escalate to the Safety Center if not resolved
- X (Twitter): Report under "non-consensual nudity" policy
- Reddit: Report under the non-consensual intimate media policy at reddit.com/report
- Pornography platforms: File under DMCA and non-consensual content — most major platforms respond within 24–48 hours to properly formatted complaints
Legal Options by State
The legal landscape for deepfakes is rapidly evolving. As of 2026, the states below are among those with specific deepfake or non-consensual intimate image laws that may apply:
States With Deepfake-Specific Laws
- Texas: The Intimate Visual Material (IVM) law covers digitally altered content, including deepfakes. Civil and criminal penalties apply.
- California: AB 602 and AB 730 address deepfake porn and political deepfakes respectively. AB 602 allows civil suits against creators.
- Virginia: Added deepfakes to its non-consensual pornography statute — a Class 1 misdemeanor, upgraded to a felony in some circumstances.
- Georgia: Non-consensual intimate images law covers digitally altered content. Criminal penalties and civil right of action.
Federal Law — Current Status
The federal TAKE IT DOWN Act, signed into law in May 2025, criminalizes the non-consensual publication of intimate images — including AI-generated deepfakes — and requires covered platforms to remove reported content within 48 hours. Other federal laws (Computer Fraud and Abuse Act, cyberstalking statutes, wire fraud) also apply to deepfake extortion cases, and the FBI has prosecuted deepfake sextortion under wire fraud statutes.
Consulting a Lawyer
Many attorneys now specialize in online harassment and non-consensual imagery cases. Some work on contingency for civil cases where significant damages are possible. The Cyber Civil Rights Initiative maintains a legal referral network and can connect victims with attorneys in their state.
🛡️ Monitor Your Identity During Recovery
Deepfake scammers often combine image abuse with identity theft attempts. Identity monitoring services can alert you if your personal data is misused.
Emotional Support Resources
Deepfake victimization — especially involving intimate imagery — carries a distinctive psychological burden. Even knowing the content is fabricated doesn't eliminate the violation of having your face used in that way. The fear of exposure, the loss of control over your image, and the potential professional and personal consequences create a complex trauma response.
Many victims report ongoing hypervigilance around their online presence, avoidance of social media, and erosion of trust in digital communication generally. These are normal responses to an abnormal violation.
- Cyber Civil Rights Initiative Crisis Line: 1-844-878-2274 — specifically for victims of non-consensual intimate images, including deepfakes
- Crisis Text Line: Text HOME to 741741
- RAINN Hotline: 1-800-656-HOPE — for sexual violence and related trauma
- National Alliance on Mental Illness (NAMI): 1-800-950-NAMI
See also our guide: Emotional Recovery After an AI Scam: You're Not Stupid — You Were Targeted — which covers the psychology of AI fraud victimization and specific recovery strategies.
Prevention Going Forward
After a deepfake incident, many victims restrict their public image significantly. That's a personal choice. Some practical steps that reduce risk without requiring complete digital withdrawal:
- Audit your social media privacy settings. Public profile photos are the primary source material for deepfakes. Restricting photo visibility to friends reduces the supply of source material.
- Watermark professional photos if you must use them publicly — a subtle watermark doesn't prevent deepfakes but may add friction and evidence.
- Set up Google Alerts for your name to monitor for any new content appearing online that mentions you.
- Pre-register with StopNCII.org using photos of yourself — this creates preventive hashes before any content is weaponized.
- Establish family verification protocols. Create a code word your family uses to verify identity in emergency calls — critical for defeating family emergency deepfake scams.
- For businesses: Implement dual-authorization for large wire transfers that cannot be bypassed by a single video call, regardless of who appears to be on the call. See CISA's guidance at cisa.gov.
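The dual-authorization rule can be sketched in code. The threshold and field names here are hypothetical; a real implementation would live inside your payment system's approval workflow:

```python
from dataclasses import dataclass, field

# Hypothetical dollar limit above which two distinct approvers are required
APPROVAL_THRESHOLD = 10_000

@dataclass
class WireTransfer:
    amount: float
    destination: str
    approvals: set = field(default_factory=set)

    def approve(self, officer_id: str) -> None:
        # A set means the same officer approving twice still counts once
        self.approvals.add(officer_id)

    def can_execute(self) -> bool:
        """Large transfers need two distinct approvers, each verified
        out-of-band (e.g. a callback to a known number) -- never on the
        strength of a single video call, however convincing."""
        if self.amount < APPROVAL_THRESHOLD:
            return len(self.approvals) >= 1
        return len(self.approvals) >= 2
```

The point of the design is that no single channel — and no single person's apparent face or voice — can release funds; the second approval must come through an independent, pre-established channel.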
For comprehensive guidance on preventing AI scams before they happen, see PreventAIScams.com.
Related Resources
- Remove your personal data from broker databases: removing your data from broker sites after a scam reduces future risk.
- How to prevent AI scams before they happen: prevention is the best defense.
- Latest AI scam alerts and warnings: stay current on new AI fraud tactics.
Frequently Asked Questions
What should I do immediately if I'm a deepfake scam victim?
Do not pay any demands. Screenshot everything — the message, the deepfake content URL, any payment demands. Preserve all evidence before it disappears. Report to the FBI at tips.fbi.gov and file with the FTC. Document all scammer contact details and platform usernames.
Is it illegal to create a deepfake of someone without consent?
It depends on your state. Texas, California, Virginia, and Georgia have laws specifically criminalizing non-consensual deepfake pornography, and many states have cyberstalking and harassment laws that may apply. At the federal level, the TAKE IT DOWN Act criminalizes publishing non-consensual intimate images, including AI-generated ones, and wire fraud and cyberstalking statutes have been used to prosecute extortion cases.
How do I get a deepfake video taken down from the internet?
File a DMCA takedown notice based on your copyright in the source photos used to create the deepfake. Contact the platform directly using their non-consensual intimate image policy. Register with StopNCII.org to generate a hash that blocks cross-platform uploads. Use Google's removal tool to de-index search results once the source is removed.
What is a CEO deepfake scam?
CEO deepfake fraud uses AI-generated video of executives to authorize fraudulent wire transfers. Employees receive what appears to be a video call from their CEO or CFO ordering an urgent fund transfer. Always verify large transfers through a separate, known phone number — never based solely on a video call, no matter how convincing.
Should I pay if a scammer threatens to release a deepfake of me?
No. Paying almost always escalates demands — scammers know you'll pay again. Report immediately to the FBI (tips.fbi.gov) and preserve all evidence. The FBI has extensive experience with sextortion and deepfake extortion cases and specifically advises against payment.