Deepfake technology — the synthesis of realistic video, audio, and images using generative AI — has moved from research lab curiosity to mainstream fraud tool in under five years. Detection is increasingly difficult, costs to generate deepfakes have collapsed, and the financial scale of deepfake-enabled fraud is now measurable in billions of dollars annually. This page aggregates the most current published data from law enforcement, cybersecurity firms, and academic researchers. All figures are sourced and cited. This reference is designed for journalists, legal professionals, policymakers, and victims seeking authoritative context.

3,000%
Increase in deepfake fraud attempts detected in 2023 compared to 2022, across Onfido's identity verification platform.
— Onfido Identity Fraud Report, 2024

Table of Contents

  1. Scale & Growth of Deepfake Fraud
  2. Financial Losses
  3. Corporate Deepfake BEC Cases
  4. Identity Verification & KYC Fraud
  5. Detection Rates & Limitations
  6. Non-Consensual Deepfakes
  7. Law Enforcement Response
  8. Frequently Asked Questions

Scale & Growth of Deepfake Fraud

10×
Increase in deepfakes detected on Sumsub's platform between 2022 and 2023
— Sumsub Identity Fraud Report, 2023
2,137%
Growth in AI-generated synthetic identity fraud 2021–2023
— Sumsub Identity Fraud Report, 2023
The cost to produce a convincing deepfake video of a real individual has dropped from approximately $10,000 in 2019 to under $10 in 2024, driven by open-source model releases and commoditized cloud GPU access. — Europol Innovation Lab, "Facing Reality? Law Enforcement and the Challenge of Deepfakes," 2023
Europol documented a 240% increase in deepfake fraud complaints filed with financial institutions and law enforcement agencies across EU member states between 2022 and 2023. — Europol IOCTA (Internet Organised Crime Threat Assessment), 2024
Sensity AI (now part of iProov) estimated that 96% of deepfake videos online in 2023 were non-consensual sexual content, while the remaining 4% were used in financial fraud, political disinformation, and impersonation. — Sensity AI Threat Intelligence Report, 2023
The number of deepfake incidents reported to the Internet Watch Foundation (IWF) more than doubled in 2023 compared to 2022, with financial fraud cases representing the fastest-growing subcategory. — Internet Watch Foundation, Annual Report, 2023
Microsoft's Azure AI team estimates that state-of-the-art deepfake generation tools can now produce a realistic 60-second video of a target from as few as 3 seconds of reference audio and a single photograph. — Microsoft Digital Defense Report, 2024

Financial Losses from Deepfake Fraud

Identity verification firm Sumsub estimated global financial losses attributable to deepfake-enabled fraud exceeded $25 billion annually as of 2024, based on incident rate extrapolation from verified fraud cases. — Sumsub Identity Fraud Report, 2024
A single deepfake business email compromise (BEC) attack on a multinational company's Hong Kong office in January 2024 resulted in a $25 million (HKD 200 million) wire transfer, after employees were deceived by a deepfake video conference featuring synthetic versions of their CFO and colleagues. — Hong Kong Police Force Press Release, February 2024; Reuters, February 2024
In 2019, a UK energy company CEO was deceived by an AI-generated voice call mimicking his German parent company's CEO, resulting in a €220,000 fraudulent wire transfer — one of the first documented deepfake voice BEC cases. — Wall Street Journal, "Fraudsters Used AI to Mimic CEO's Voice in Unusual Cybercrime Case," 2019
Deepfake-powered investment scams using synthetic celebrity endorsement videos generated an estimated $1.7 billion in victim losses globally in 2023, affecting victims in at least 37 countries. — Global Anti-Scam Alliance, Global State of Scams Report, 2024
UK Finance, the UK banking and finance industry trade association, reported that fraud losses attributable to impersonation — including AI voice and video deepfakes — totaled £485 million in 2023. — UK Finance, Annual Fraud Report, 2024

Corporate Deepfake BEC: Documented Cases

The FBI issued a private industry notification in 2022, with follow-up reporting in 2023, warning financial institutions of an increase in deepfake video calls used to apply for remote work positions at companies with access to sensitive financial systems and customer data. — FBI Private Industry Notification (PIN) 20220628-001, 2022; follow-up reporting 2023
At least 58 documented BEC incidents reported to European financial supervisory authorities between 2022 and 2024 involved AI voice cloning or deepfake video impersonation of senior executives. — Europol IOCTA, 2024
In 2023, a U.S. financial services firm avoided a $35 million loss when a fraud prevention officer recognized behavioral inconsistencies in a deepfake video call purportedly from the company's CFO requesting an emergency wire transfer. — FS-ISAC (Financial Services Information Sharing and Analysis Center) Case Summary, 2023
Gartner predicted in 2023 that by 2026, 30% of enterprises will consider identity verification solutions unreliable in isolation due to AI-generated deepfakes, necessitating multi-factor biometric and behavioral approaches. — Gartner, "Predicts 2024: Identity and Access Management," 2023

Identity Verification & KYC Fraud

Deepfake-assisted fraud attempts against Know Your Customer (KYC) onboarding processes at financial institutions rose 3,000% between 2022 and 2023, according to Onfido's analysis of 1 billion identity documents processed annually. — Onfido Identity Fraud Report, 2024
Face-swap deepfakes — where a fraudster's live face is replaced in real time during a video KYC call — now account for 13.8% of all identity fraud attempts on Sumsub-protected platforms as of Q3 2023. — Sumsub Identity Fraud Report, 2023
Cryptocurrency exchanges and crypto-fiat on-ramps experience the highest deepfake KYC attack rates among regulated financial service types, with some platforms reporting 1 in 7 verification attempts flagged as synthetic or deepfake in 2024. — iProov Biometric Threat Intelligence Report, 2024
iProov's 2024 Threat Intelligence Report found that 55% of identity fraud attacks now use some form of AI-generated or AI-manipulated content, up from 18% in 2021. — iProov Biometric Threat Intelligence Report, 2024

Detection Rates & Limitations

Humans correctly identify deepfake video only 24% of the time when shown high-quality synthetic content alongside real video, according to MIT Media Lab research — effectively worse than random chance for the best deepfakes. — MIT Media Lab, "Detecting Deepfake Videos," 2023
Microsoft's Video Authenticator tool achieves 70–87% detection accuracy depending on deepfake generation method — and accuracy degrades as newer generation models are released, requiring continuous model updates. — Microsoft AI for Good Lab, Video Authenticator Documentation, 2024
A 2023 Stanford Internet Observatory evaluation of six commercial deepfake detection tools found that no tool exceeded 80% accuracy across a diverse test set, and all tools showed significant blind spots for audio-only deepfakes. — Stanford Internet Observatory, "Evaluating Commercial Deepfake Detectors," 2023
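Accuracy figures like the sub-80% result above are typically computed as the share of correct verdicts over a labeled test set of real and synthetic samples. The sketch below is illustrative only — the detector verdicts and ground-truth labels are invented placeholders, not the Stanford evaluation harness:

```python
# Minimal accuracy computation for a deepfake detector over a labeled test set.
# The verdicts and labels below are hypothetical placeholders.

def accuracy(predictions, labels):
    """Fraction of test items where the detector's verdict matches ground truth."""
    assert len(predictions) == len(labels), "one verdict per test item"
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical verdicts: True = "flagged as deepfake", False = "judged real"
preds = [True, True, False, True, False, False, True, False, True, False]
truth = [True, False, False, True, False, True, True, False, True, True]
print(f"{accuracy(preds, truth):.0%}")  # 7 of 10 correct -> 70%
```

A real evaluation would also break results out by generation method and media type (video vs. audio-only), since, as the Stanford study found, aggregate accuracy can hide large blind spots.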
NIST's Face Recognition Technology Evaluation (FRTE) found that state-of-the-art deepfake generation can defeat commercial facial recognition systems with a false acceptance rate of 4.6% — meaning approximately 1 in 22 deepfake authentication attempts succeeds against leading biometric systems. — NIST FRTE, 2023
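The "approximately 1 in 22" figure is simple arithmetic on the 4.6% false acceptance rate. A quick back-of-the-envelope check (illustrative conversion only, not NIST's evaluation code):

```python
# Convert a biometric false acceptance rate (FAR) into "1 in N" odds.
# Illustrative arithmetic only -- not part of NIST's FRTE methodology.

def far_to_odds(far: float) -> int:
    """Return N such that a FAR of `far` means roughly 1 in N attempts succeed."""
    if not 0 < far < 1:
        raise ValueError("FAR must be a fraction strictly between 0 and 1")
    return round(1 / far)

print(far_to_odds(0.046))  # 4.6% FAR -> roughly 1 in 22 attempts succeeds
```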

Non-Consensual & Harassment Deepfakes

The number of non-consensual intimate deepfake images reported to the Internet Watch Foundation grew by over 400% between 2020 and 2023, with minors depicted in an increasing proportion of reported content. — Internet Watch Foundation, Annual Report, 2023
A 2023 survey by the Cyber Civil Rights Initiative found that 1 in 12 women in the United States has been targeted by non-consensual intimate deepfake imagery, with the majority of perpetrators being current or former intimate partners. — Cyber Civil Rights Initiative, "Non-Consensual Deepfake Survey," 2023

Law Enforcement Response

As of 2024, 47 U.S. states have enacted or are considering legislation specifically addressing deepfake fraud, non-consensual deepfake imagery, or AI-generated impersonation in financial crimes. — National Conference of State Legislatures (NCSL), AI Legislation Tracker, 2024
The EU AI Act, which came into force in August 2024, requires providers and deployers of deepfake generation systems to disclose that synthetic content was AI-generated, with fines of up to €15 million or 3% of worldwide annual turnover for non-compliance with these transparency obligations. — European Parliament, EU AI Act (Regulation 2024/1689), 2024
Europol's 2024 IOCTA identified deepfake technology as a top-five emerging cybercrime threat requiring cross-border coordination, noting that most deepfake fraud infrastructure operates across multiple jurisdictions to avoid takedown. — Europol IOCTA, 2024

Cite This Page:

AIScamRecovery.com. "Deepfake Fraud Statistics 2026: Financial Losses, Detection Rates & Cases." April 2026. https://aiscamrecovery.com/stats/deepfake-fraud-statistics-2026

Frequently Asked Questions

How much money do deepfake scams cost businesses annually?

Deepfake-related fraud losses are estimated at over $25 billion globally by identity verification firm Sumsub. Individual incidents range from hundreds of thousands to tens of millions — the 2024 Hong Kong BEC deepfake attack cost one company $25 million in a single transaction. Corporate losses from deepfake-assisted KYC fraud and account takeover add significantly to the total.

How fast are deepfake fraud attacks growing?

Onfido reported a 3,000% increase in deepfake fraud attempts in 2023 vs. 2022. Sumsub found a 10-fold increase on its platform in the same period. Europol documented a 240% rise in deepfake fraud reports across EU member states. Growth is accelerating as generation costs collapse and open-source tools proliferate.

What industries are most targeted by deepfake fraud?

Financial services, cryptocurrency exchanges, and fintech companies face the highest attack volumes due to KYC requirements that deepfakes can spoof. Corporate finance departments are targeted for BEC wire fraud. Human resources teams are targeted by deepfake job applicants seeking system access. Legal and government sectors face deepfake impersonation for credential fraud.

Can deepfakes be detected reliably?

No current system achieves reliable detection. MIT Media Lab found humans correctly identify high-quality deepfakes only 24% of the time. Commercial detection tools top out at roughly 80% accuracy, and all degrade as generation models improve. NIST testing shows deepfakes defeat leading facial recognition systems in about 1 in 22 attempts. Multi-factor authentication and behavioral verification are currently the strongest defenses.

How do I report a deepfake fraud incident?

File with FBI IC3 at ic3.gov. For corporate BEC involving wire fraud, also notify FinCEN through your financial institution's Suspicious Activity Report (SAR) process. For non-consensual intimate deepfakes, contact the Cyber Civil Rights Initiative hotline (cybercivilrights.org) and your state attorney general. Europol accepts cross-border deepfake fraud referrals from EU member states through national law enforcement.