Veriff Deepfakes Report 2026: Deepfake Detection in the UK

Can humans still tell real from fake? Veriff partnered with Kantar to test the ability of 1,000 UK adults to detect deepfakes, and the results challenge everything we assume about awareness of AI-generated visuals.

Key insights from Veriff Deepfakes Report 2026 UK:

  • 74% know what deepfakes are, yet perform barely above chance at spotting them
  • Fake videos were the hardest to detect, often mistaken for authentic content
  • 22% of Britons never verify suspicious content, the highest rate globally

As deepfake threats grow and synthetic identities multiply, identity verification solutions become essential to digital trust.

Why this report matters

In the UK, identity verification is no longer just a routine compliance requirement; it must be understood as a critical component of digital infrastructure. As AI-generated content becomes indistinguishable from reality, relying on manual visual inspection increases exposure to fraud and impersonation attacks.

  • 74% awareness of the term “deepfake” in the UK, the highest among all analysed global markets.
  • 0.07 mean detection score for UK respondents, only fractionally better than a coin flip (a score of 0 represents random guessing).
  • Personal fraud and impersonation scams are Britons’ top concern regarding synthetic media.
  • 22% of UK respondents admit they do not try to verify suspicious content, the highest rate of non-verification across all surveyed markets.

What you’ll learn in this report

Deepfake detection accuracy in the UK

Why theoretical knowledge creates a false sense of security while actual detection accuracy remains barely above a coin toss.

Identity fraud and synthetic media risks

How synthetic identities and deepfake videos are being deployed to bypass verification checks and open fraudulent accounts.

Human vs AI detection limits

Why experience creating AI visuals provides only a marginal 5% increase in accurately identifying fake media.

Future of identity verification

Why UK businesses must shift toward AI-powered biometric authentication to detect synthetic media at the point of interaction.

Get the Veriff Deepfakes Report 2026

Reinforce your fraud prevention strategy with data-driven insights into how 1,000 UK respondents interact with synthetic media. Learn why seeing is no longer believing in the UK market.

  • CCPA/CPRA: Compliant
  • GDPR (EU): Compliant
  • SOC 2 Type II: Certified
  • ISO/IEC 27001:2022: Certified
  • UK Cyber Essentials: Certified
  • ISO/IEC 30107-3: Level 1
  • ISO/IEC 30107-3: Level 2
  • UKDIATF
  • FIDO: Certified

Is the Deepfakes Report 2026 free to access?

Yes, the full report is available for free download to help organizations improve their fraud prevention strategies.


What is the current state of deepfake detection accuracy in the UK?

British respondents achieved a mean detection score of 0.07. This indicates that while they perform slightly better than a “coin flip” (0.0), their ability to distinguish deepfakes from reality is almost entirely based on guessing.
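A detection score where 0.0 corresponds to random guessing can be read as chance-adjusted accuracy. The report does not publish its scoring formula, so the sketch below only illustrates one plausible interpretation: rescaling raw accuracy on a two-choice real-vs-fake task so that 50% (a coin flip) maps to 0 and perfect accuracy maps to 1. The function name and formula are assumptions for illustration, not Veriff's methodology.

```python
def chance_adjusted_score(raw_accuracy: float) -> float:
    """Rescale raw accuracy on a two-choice task so that chance (0.5)
    maps to 0.0 and perfect accuracy maps to 1.0.

    NOTE: an assumed formula for illustration only, not the
    report's published scoring methodology.
    """
    return (raw_accuracy - 0.5) / 0.5

# Under this assumption, a score of 0.07 would correspond to
# roughly 53.5% raw accuracy on real-vs-fake judgments.
print(round(chance_adjusted_score(0.535), 2))  # 0.07
```

Whatever the exact formula, the takeaway is the same: a score this close to zero means UK respondents are barely distinguishable from random guessers.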


How does deepfake awareness in the UK compare to other markets?

The UK leads globally in conceptual familiarity; 74% of UK adults are familiar with the term “deepfake,” outperforming Brazil at 67% and the US at 63%. However, this high awareness does not translate to better detection, creating a “false sense of security”.


What defines a "high-risk" user in the context of AI fraud?

A “high-risk” user—accounting for approximately 7% of the market—is someone who demonstrates low detection accuracy, expresses high confidence in their abilities, and rarely or never verifies suspicious content.


Why is manual identity review becoming ineffective against deepfakes?

Manual review relies on the human eye, which is no longer a reliable line of defense: modern AI can replicate visual cues such as skin texture and facial movements so accurately that human intuition and visual inspection are no longer dependable safeguards. Because human detection is now close to random, businesses that rely on manual judgment inherit that vulnerability directly.