A mask can be funny, scary, fantastical, or eerily accurate. At Halloween, any of these masks is acceptable, even fun. But in the digital world, where masks can be swapped or changed in an instant, knowing who you are interacting with and what they have access to demands that all masks come off.
It is becoming increasingly difficult to tell who is or is not a bad actor. Suspicious characters can no longer be identified simply as those sending out emails with false links and bad intentions. They are individuals who can, very skillfully, hide their true identity behind someone else's. Hackers are increasingly employing deepfakes, lifelike manipulations of a target's likeness or voice, to gain access to protected systems and information.
Deepfakes-as-a-service offerings enable even less advanced fraud actors to near-flawlessly impersonate a target. This progression makes all kinds of fraud, from individual blackmail to defrauding entire corporations, significantly harder to detect and defend against. With the help of Generative Adversarial Networks (GANs), even a single image of an individual can be enough for fraudsters to produce a convincing deepfake of that individual.
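To see why GANs are so effective at producing convincing fakes, consider the adversarial game at their core: a generator learns to produce samples while a discriminator learns to tell them apart from real data, and each improves by exploiting the other's weaknesses. The sketch below is a deliberately tiny illustration of that dynamic, using 1-D data and linear models in place of the deep networks real deepfake systems use; all parameters and data here are illustrative assumptions, not a working deepfake tool.

```python
import numpy as np

rng = np.random.default_rng(0)

a, b = 1.0, 0.0    # generator: g(z) = a*z + b, noise z ~ N(0, 1)
w, c = 0.0, 0.0    # discriminator: D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(1000):
    x_real = rng.normal(4.0, 1.0, batch)   # "real" data: samples from N(4, 1)
    z = rng.normal(size=batch)
    x_fake = a * z + b                     # generator's forgeries

    # Discriminator step: push D toward 1 on real data, 0 on fakes
    # (gradients of the logistic loss, derived by hand for this toy).
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step (non-saturating loss): adjust a, b so the
    # discriminator is more likely to call the fakes real.
    d_fake = sigmoid(w * x_fake + c)
    a -= lr * np.mean(-(1 - d_fake) * w * z)
    b -= lr * np.mean(-(1 - d_fake) * w)

# After training, the generator's output mean (b) has drifted toward
# the real mean of 4: the forgeries have become statistically convincing.
print(round(b, 1))
```

The same pressure that drags `b` toward the real distribution here is what, at scale, lets a GAN trained on face images turn a single photo into a persuasive likeness.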
While we’d like to think that user authentication tools can instantly spot a deepfake, that isn’t always the case. Given the rising sophistication of deepfakes, certain forms of user authentication can be fooled by a competent fraudster. To better defend against deepfakes, organizations need to employ specialized AI tools that identify the subtle but telltale signs of a manipulated image or voice.
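One family of "telltale signs" such tools can look for is statistical: generator pipelines typically upsample low-resolution feature maps, which leaves a different frequency fingerprint than a camera sensor does. The toy sketch below illustrates the idea with synthetic data; the 64x64 images, the block-upsampling stand-in for a generator, and the energy-ratio heuristic are all assumptions for illustration, nothing like a production detector.

```python
import numpy as np

rng = np.random.default_rng(1)

def high_freq_ratio(img):
    """Fraction of spectral energy outside the low-frequency centre."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    ch, cw, r = h // 2, w // 2, h // 8
    low = spec[ch - r:ch + r, cw - r:cw + r].sum()
    return 1.0 - low / spec.sum()

# "Camera" image: broadband noise texture, rich in high frequencies.
camera = rng.normal(size=(64, 64))

# "Generated" image: 8x8 content naively upsampled to 64x64, mimicking
# the smoothed output of an upsampling decoder.
generated = np.kron(rng.normal(size=(8, 8)), np.ones((8, 8)))

# The synthetic image concentrates its energy at low frequencies,
# so its high-frequency ratio comes out lower than the camera image's.
print(high_freq_ratio(camera) > high_freq_ratio(generated))
```

Real detection systems learn far subtler cues than this single ratio, but the principle is the same: manipulated media carries measurable traces of how it was made.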
It’s time to unmask the fraudsters and reclaim our confidence in our user authentication tools and security protocols. As the fraudsters step up their game with deepfakes, it’s time the good guys did the same.