
Deepfake technology — AI-generated video that maps one person's face onto another's body — has created a category of harm that didn't exist five years ago. Victims discover their face has been placed in pornographic videos they never appeared in, used in executive impersonation fraud that cost their company hundreds of thousands of dollars, or deployed in fake "emergency" videos designed to extort money from family members.

The National Center for Missing and Exploited Children has reported a dramatic increase in deepfake sextortion cases involving minors, and the FBI received over 7,000 reports of financially motivated sextortion (including deepfake-based extortion) in a single year. If this has happened to you, this guide provides the exact steps to take.

Types of Deepfake Scams You May Be Facing

Deepfake Sextortion

A scammer takes photos from your social media or other public sources, uses AI deepfake tools to place your face on pornographic content, then contacts you threatening to share the fabricated video with your contacts, employer, or family unless you pay. This is the most common individual-targeting deepfake scam. The content is fabricated — but the threat is real enough to cause serious psychological harm regardless.

CEO / Executive Fraud Deepfakes

Criminals create deepfake video of corporate executives — real people — to impersonate them on video calls with finance or accounting staff. The "executive" then authorizes urgent wire transfers. In 2024, a finance worker at a Hong Kong company transferred $25 million after a deepfake video call on which every other participant appeared to be a company executive. This type of fraud increasingly targets mid-sized businesses, where verification protocols tend to be less rigorous.

Family Emergency Deepfakes

Scammers create short deepfake or AI-voice clips of family members in apparent distress, then call grandparents or parents claiming the family member is in jail, in an accident, or in danger. This is an evolution of the "grandparent scam" — now with a convincing AI-generated voice or short video clip to make the deception more credible.

Identity Fraud Using Your Face

Some deepfake operations use your face to create fake ID documents, to pass KYC (know your customer) verification at financial institutions, or to create convincing fake social media profiles for romance or investment scams targeting others.

Immediate Steps — Do These First

Preserve Evidence Before Anything Else

Before you block, delete, or reply to anything, capture the evidence: screenshot every message and demand, copy the URLs where the content appears, save the scammer's usernames, phone numbers, and email addresses, and record any payment addresses they provide. Evidence often disappears quickly once a scammer realizes you won't pay.

⚠ Critical: Scammers often set deadlines designed to induce panic. The deadline is manufactured. Do not let artificial urgency cause you to pay before consulting law enforcement. The FBI has experience with this specific situation.

Reporting to the FBI, FTC, and NCMEC

FBI Internet Crime Complaint Center

File at ic3.gov (general internet crimes) or submit an online tip at tips.fbi.gov. The FBI has specific units handling sextortion and deepfake extortion cases. Include every piece of evidence: scammer contact info, content URLs, wallet addresses, and all communications.

For sextortion cases, the FBI's guidance is explicit: do not pay. Law enforcement has seen thousands of these cases and has strategies for dealing with them. Filing a report also protects you legally — it documents that you were victimized, which matters if content ends up being associated with your name in a background check or other context.

Federal Trade Commission

File at reportfraud.ftc.gov. Select "Impersonation scam" as the category. The FTC compiles data that drives regulatory action and consumer alerts.

NCMEC — If a Minor Is Involved

If the deepfake victim is under 18, or if you're an adult being threatened with content created from photos taken when you were a minor, contact the National Center for Missing and Exploited Children immediately through its CyberTipline at report.cybertip.org or by phone at 1-800-843-5678 (1-800-THE-LOST). NCMEC's Take It Down service (takeitdown.ncmec.org) can also hash imagery of minors so participating platforms block it.

NCMEC has direct relationships with platforms and law enforcement and can trigger responses that individual reports cannot. This is especially critical for any content involving minors.

Digital binary code on screen — representing AI deepfake technology

Platform Takedown Requests

Getting deepfake content removed from platforms requires a multi-track approach:

StopNCII.org — Your First Stop

StopNCII.org (Stop Non-Consensual Intimate Images) is a free service that creates a "hash" — a digital fingerprint — of images and videos. Hashing happens on your own device; the images themselves are never uploaded. The hash is shared with partner platforms (including Facebook, Instagram, TikTok, Snapchat, and others) so they can automatically detect and remove matching content without you having to report it repeatedly. You can submit content you control (your own photos) to generate preventive hashes. This is the highest-leverage single action against non-consensual intimate images.
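The fingerprint idea is easy to illustrate with an ordinary cryptographic hash. A minimal Python sketch follows; note that StopNCII itself uses perceptual (similarity) hashing such as Meta's PDQ, which still matches cropped or re-encoded copies, whereas the SHA-256 shown here matches only byte-identical files — it is an analogy, not StopNCII's actual algorithm:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest that identifies this exact content."""
    return hashlib.sha256(data).hexdigest()

# Stand-in byte strings for image files (illustrative only).
original = b"...image bytes..."
exact_copy = b"...image bytes..."
edited = b"...image bytes!.."

# Identical files always produce the identical fingerprint, so a
# platform can match a re-upload against a submitted hash without
# ever seeing the image itself.
assert fingerprint(original) == fingerprint(exact_copy)

# Any change to the bytes changes the digest completely -- which is
# why real matching systems use perceptual hashes that tolerate
# minor edits.
assert fingerprint(original) != fingerprint(edited)
```

This is also why you only ever share the hash, never the image: the digest reveals nothing about the content it was computed from.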

DMCA Takedown Notices

If the deepfake was built from photos you took yourself (selfies, for example), you own the copyright in those source photos — US law does not, however, give you a copyright in your likeness as such. A DMCA (Digital Millennium Copyright Act) takedown notice filed with a platform obliges it to remove the content promptly or risk losing its safe-harbor protection from liability. Most platforms have DMCA submission forms in their help centers. A valid notice must include: identification of the copyrighted work (your original photo), the URL of the infringing material, your contact information, a good-faith statement that the use is unauthorized, a statement under penalty of perjury that the notice is accurate, and your physical or electronic signature.

Google Content Removal

Google has a specific removal request tool for non-consensual explicit imagery. Once the content is removed from the source platform, use Google's removal tool to de-index any search results pointing to cached or mirror copies.

Platform-Specific Reporting

All major platforms have policies against non-consensual intimate imagery (NCII). File a report with each platform where the content appears, using its NCII or impersonation reporting form.

The legal landscape for deepfakes is rapidly evolving. As of 2026, these states have specific deepfake or non-consensual intimate image laws that may apply:

States With Deepfake-Specific Laws

Texas, California, Virginia, and Georgia have laws specifically criminalizing non-consensual deepfake pornography. Many other states reach deepfake abuse through broader non-consensual intimate image, cyberstalking, or harassment statutes. Check your state's current law — this area changes quickly.

Federal Law — Current Status

As of 2026, the US does not have a comprehensive federal deepfake law, but several bills are at various stages of the legislative process. Existing federal laws (the Computer Fraud and Abuse Act, cyberstalking statutes, wire fraud) can apply to deepfake extortion cases, and federal prosecutors have charged deepfake sextortion under wire fraud statutes.

Consulting a Lawyer

Many attorneys now specialize in online harassment and non-consensual imagery cases. Some work on contingency for civil cases where significant damages are possible. The Cyber Civil Rights Initiative maintains a legal referral network and can connect victims with attorneys in their state.

🛡️ Monitor Your Identity During Recovery

Deepfake scammers often combine image abuse with identity theft attempts. Identity monitoring services can alert you if your personal data is misused.

Emotional Support Resources

Deepfake victimization — especially involving intimate imagery — carries a distinctive psychological burden. Even knowing the content is fabricated doesn't eliminate the violation of having your face used in that way. The fear of exposure, the loss of control over your image, and the potential professional and personal consequences create a complex trauma response.

Many victims report ongoing hypervigilance around their online presence, avoidance of social media, and erosion of trust in digital communication generally. These are normal responses to an abnormal violation.

See also our guide: Emotional Recovery After an AI Scam: You're Not Stupid — You Were Targeted — which covers the psychology of AI fraud victimization and specific recovery strategies.

Prevention Going Forward

After a deepfake incident, many victims sharply restrict their public image. That's a personal choice. Some practical steps reduce risk without requiring complete digital withdrawal: set photo visibility on social media to friends-only, prune old public photos and tagged images, set up alerts (such as Google Alerts) for your name, and agree on a family code word so relatives can verify any "emergency" call or video.

For comprehensive guidance on preventing AI scams before they happen, see PreventAIScams.com.

Frequently Asked Questions

What should I do immediately if I'm a deepfake scam victim?

Do not pay any demands. Screenshot everything — the message, the deepfake content URL, any payment demands. Preserve all evidence before it disappears. Report to the FBI at tips.fbi.gov and file with the FTC. Document all scammer contact details and platform usernames.

Is it illegal to create a deepfake of someone without consent?

It depends on your state. Texas, California, Virginia, and Georgia have laws specifically criminalizing non-consensual deepfake pornography. Many states also have cyberstalking and harassment laws that may apply. Federal law doesn't yet have a comprehensive deepfake statute, but wire fraud and cyberstalking statutes have been used to prosecute these cases.

How do I get a deepfake video taken down from the internet?

File a DMCA takedown notice based on your copyright in the source photos (selfies or other images you took). Contact the platform directly using its non-consensual intimate image policy. Register with StopNCII.org to generate a hash that blocks re-uploads across partner platforms. Once the source is removed, use Google's removal tool to de-index search results pointing to it.

What is a CEO deepfake scam?

CEO deepfake fraud uses AI-generated video of executives to authorize fraudulent wire transfers. Employees receive what appears to be a video call from their CEO or CFO ordering an urgent fund transfer. Always verify large transfers through a separate, known phone number — never based solely on a video call, no matter how convincing.
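The "never authorize on a video call alone" rule can be written down as a policy check. The sketch below is hypothetical — the threshold, field names, and `approve` function are illustrative, not any real company's procedure — but it captures the key idea: above some amount, a request is approved only after callback verification on a number already on file:

```python
from dataclasses import dataclass

# Illustrative policy threshold -- each organization sets its own.
CALLBACK_THRESHOLD_USD = 10_000

@dataclass
class TransferRequest:
    amount_usd: float
    requested_via: str       # e.g. "video_call", "email", "in_person"
    callback_verified: bool  # confirmed via a known number on file

def approve(req: TransferRequest) -> bool:
    """A video call by itself never authorizes a large transfer.

    Deepfakes can forge the call, but they cannot answer the
    executive's real phone number.
    """
    if req.amount_usd >= CALLBACK_THRESHOLD_USD:
        return req.callback_verified
    return True

# The Hong Kong case would have failed this check:
urgent = TransferRequest(25_000_000, "video_call", callback_verified=False)
assert approve(urgent) is False
```

The design point is that verification uses a channel the scammer does not control: the phone number comes from company records, never from the suspicious request itself.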

Should I pay if a scammer threatens to release a deepfake of me?

No. Paying almost always escalates demands — scammers know you'll pay again. Report immediately to the FBI (tips.fbi.gov) and preserve all evidence. The FBI has extensive experience with sextortion and deepfake extortion cases and specifically advises against payment.
