Why this report matters
In the UK, identity verification is no longer just a routine compliance requirement; it is a critical component of digital infrastructure. As AI-generated content becomes indistinguishable from reality, relying on manual visual inspection increases exposure to fraud and impersonation attacks.
74%: awareness of the term “deepfake” in the UK, the highest among all analysed global markets.
0.07: mean detection score for UK respondents, only a tiny fraction better than a coin flip, where 0.0 represents random guessing.
of Britons cite personal fraud and impersonation scams as their top concern regarding synthetic media.
of UK respondents admit they do not try to verify suspicious content, the highest rate of non-verification across all surveyed markets.
Get the Veriff Deepfakes Report 2026
Reinforce your fraud prevention strategy with data-driven insights into how 1,000 UK respondents interact with synthetic media. Learn why seeing is no longer believing in the UK market.
Is the Deepfakes Report 2026 free to access?
Yes, the full report is available for free download to help organizations improve their fraud prevention strategies.
What is the current state of deepfake detection accuracy in the UK?
British respondents achieved a mean detection score of 0.07. This indicates that while they perform slightly better than a “coin flip” (0.0), their ability to distinguish deepfakes from reality is almost entirely based on guessing.
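To make the scale concrete, here is a minimal sketch of one plausible way a chance-adjusted detection score could be defined, where 0.0 corresponds to random guessing and 1.0 to perfect detection. The report's exact scoring methodology is not specified here, so the formula and the `chance_adjusted_score` helper are illustrative assumptions.

```python
# Hypothetical illustration (assumption, not the report's actual metric):
# rescale raw accuracy so that guessing at the chance rate maps to 0.0
# and perfect accuracy maps to 1.0.

def chance_adjusted_score(correct: int, total: int, chance: float = 0.5) -> float:
    """Return accuracy rescaled so `chance` -> 0.0 and 1.0 -> 1.0."""
    accuracy = correct / total
    return (accuracy - chance) / (1.0 - chance)

# Under this definition, a score of 0.07 corresponds to about 53.5%
# raw accuracy on a binary real-vs-fake judgment:
print(round(chance_adjusted_score(535, 1000), 2))  # 0.07
```

Under this reading, a 0.07 score means respondents answered correctly only slightly more than half the time on a two-way real-versus-fake judgment.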
How does deepfake awareness in the UK compare to other markets?
The UK leads globally in conceptual familiarity; 74% of UK adults are familiar with the term “deepfake,” outperforming Brazil at 67% and the US at 63%. However, this high awareness does not translate to better detection, creating a “false sense of security”.
What defines a "high-risk" user in the context of AI fraud?
A “high-risk” user—accounting for approximately 7% of the market—is someone who demonstrates low detection accuracy, expresses high confidence in their abilities, and rarely or never verifies suspicious content.
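The three traits named above can be sketched as a simple screening rule. This is an illustrative sketch, not Veriff's actual methodology: the `SurveyRespondent` fields and the numeric thresholds are assumptions chosen only to show how the three criteria combine.

```python
from dataclasses import dataclass

# Illustrative assumption: a respondent profile built from the three
# traits the report names (low accuracy, high confidence, no verification).

@dataclass
class SurveyRespondent:
    detection_score: float   # chance-adjusted; 0.0 = random guessing
    confidence: int          # self-rated ability, 1 (low) to 5 (high)
    verifies_content: bool   # does the respondent try to verify suspicious media?

def is_high_risk(r: SurveyRespondent) -> bool:
    """Flag users combining low accuracy, high confidence, and no verification.
    Thresholds (0.1, 4) are hypothetical, for illustration only."""
    return (r.detection_score < 0.1
            and r.confidence >= 4
            and not r.verifies_content)

print(is_high_risk(SurveyRespondent(0.07, 5, False)))  # True
print(is_high_risk(SurveyRespondent(0.07, 5, True)))   # False
```

The point of the conjunction is that any one trait alone is survivable; it is the combination of overconfidence with poor accuracy and no verification habit that defines the roughly 7% high-risk segment.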
Why is manual identity review becoming ineffective against deepfakes?
Manual review relies on the human eye, which is no longer a reliable line of defense: modern AI can replicate visual cues such as skin texture and facial movements so accurately that human intuition and visual inspection are no longer dependable safeguards. Because human detection performance is now close to random, businesses that depend on manual judgment inherit that vulnerability directly.