Why this report matters
In the US, identity verification is no longer just a compliance requirement; it is a core security layer. As deepfake technology becomes more realistic, relying on manual checks increases exposure to fraud, onboarding risk, and impersonation attacks.
Growth in AI-generated or altered media presented during verifications in the last year
Average detection score among US respondents, only slightly better than a coin flip (on a scale where 1 is perfect accuracy)
Awareness of the term “deepfake” in the US
Share of Americans who cite fraud and impersonation scams as their top concern around synthetic media
Get the Veriff Deepfakes Report 2026
Reinforce your fraud prevention strategy with data-driven insights into how 1,000 US respondents interact with synthetic media, and learn why seeing is no longer believing in the US market.
Is the Deepfakes Report 2026 free to access?
Yes, the full report is available for free download to help organizations improve their fraud prevention strategies.
What is the current state of deepfake detection accuracy?
Research shows that human detection accuracy currently sits at “chance level,” meaning the average person is essentially guessing when asked to identify deepfakes.
How does deepfake awareness in the US compare to other markets?
The US currently shows lower awareness of deepfakes (63%) than both the UK (74%) and Brazil (67%).
What defines a "high-risk" user in the context of AI fraud?
High-risk users are those who demonstrate low detection accuracy while maintaining high confidence in their ability to spot manipulated media.
Why is manual identity review becoming ineffective against deepfakes?
Modern AI can replicate visual cues such as skin texture and facial movement so convincingly that human intuition and visual inspection are no longer reliable safeguards.