Online crime is growing - fast. We look at the trends emerging around the world that are pushing technology to keep evolving and fighting the good fight.
Patrick Johnson, August 20th, 2020
In recent years, we have seen a serious increase in the number and severity of digital and online fraud incidents. In the US alone, synthetic identity fraud is thought to be the fastest-growing financial crime.
Statistics show that fraud is increasing significantly: P2P fraud rose 733% from 2016 to 2019; account takeovers increased 72% from 2018 to 2019; and 2019 alone saw 5,183 data breaches that exposed 7.9 billion records.
Predictions for online fraud were worrying enough at the beginning of the year, but then COVID-19 hit and things got even worse. With vast numbers of people being driven online, unscrupulous fraudsters have found an increasing number of opportunities to take advantage of them.
So, what is driving these fraud trends? The explosion in the number of digital channels, growing computational power, and advanced technology such as AI are all significant influences.
You would have thought that, with all the incredible work being done by companies like Veriff that are tirelessly working against fraud, fraudsters might be losing the battle. Unfortunately, the technological advances achieved by the good guys are also being sought by the bad guys. It just shows how important it is for companies to keep on top of their digital security systems in order to protect both themselves and their customers.
Here is Veriff’s overview of 7 trends that are happening in online fraud right now:
1. New account fraud

There has recently been a substantial rise in new account fraud, enabled by a sudden increase in the creation of so-called ‘synthetic identities’. These identities are faked, stolen, modified from real data, or simply bought on the dark web.
Fraudsters use them to open new bank accounts, wait until they have extracted maximum advantage and then simply ‘bust out’ with either data or financial gain. These identities are also used on online marketplaces that offer promotions to new customers: the synthetic identities are combined with stolen payment cards, and dozens of non-existent individuals are suddenly quids in.
Robust identity verification techniques are essential if companies are to strengthen their defences against new account fraud. Processes that require applicants to put their real face forward (and provide sufficient evidence that they are who they say they are) when first creating an account go a long way towards this.
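To make the idea concrete, here is a minimal, purely illustrative sketch of a signup gate that only activates an account once every verification step has passed. The types, function names and threshold are assumptions for illustration, not a real verification API:

```python
# Hypothetical sketch of gating account creation on identity verification.
# All names and the 0.9 threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    document_ok: bool      # government ID passed authenticity checks
    selfie_match: float    # similarity between selfie and ID photo, 0..1
    liveness_ok: bool      # selfie came from a live person, not a replay

def may_open_account(a: Applicant, match_threshold: float = 0.9) -> bool:
    """Only activate the account if every verification step passed."""
    return a.document_ok and a.liveness_ok and a.selfie_match >= match_threshold

# A synthetic identity built from stolen data typically fails at least one check:
print(may_open_account(Applicant("Jane Doe", True, 0.95, True)))   # True
print(may_open_account(Applicant("Fake ID", True, 0.41, False)))   # False
```

The point of the sketch is simply that each check closes off one avenue a synthetic identity relies on: a forged document, a stolen photo, or a replayed image.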
2. Account takeover fraud

Research from the Fraud Trends and Tectonics white paper shows that instances of account takeover (ATO) fraud are on the rise. Part of the reason is that it is generally easier for fraudsters to interfere with an existing account than to open a new one, and the payoff can often be much quicker.
Fraudsters doing this rely on the fact that an established and trusted relationship between a service provider and customer may well be subject to less rigorous anti-fraud processes.
ATOs are certainly not new, but they are on the rise, and it is likely we will see an increase in both the number of attacks and the various ways they are carried out. It is a worrying fact that although many companies spend millions preventing other types of fraud, ATOs are not taken as seriously as they perhaps should be.
Increasingly sophisticated tools that allow fraudsters to ‘scrape for ATOs’ or bypass 2FA processes to access login and password data are exacerbating the situation.
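One common first line of defence against the automated credential stuffing behind many ATOs is a simple velocity check on failed logins. The sketch below is an illustration of that general idea, not any particular vendor's implementation; the window and threshold values are arbitrary assumptions:

```python
# Illustrative velocity check: flag an account for locking after too many
# failed logins in a short window. Thresholds are arbitrary assumptions.
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # look at the last 5 minutes
MAX_FAILURES = 5       # allow at most 5 failed attempts in that window

failures = defaultdict(deque)  # account id -> timestamps of failed logins

def record_failure(account: str, now: float) -> bool:
    """Record a failed login; return True if the account should be locked."""
    q = failures[account]
    q.append(now)
    # Drop failures that fell outside the time window
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_FAILURES

for i in range(6):
    locked = record_failure("alice@example.com", float(i))
print(locked)  # True: six failures within five minutes triggers a lock
```

Real systems layer many more signals on top (device fingerprinting, geolocation, behavioural biometrics), but even this basic pattern makes bulk credential testing far more expensive for an attacker.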
3. Phishing

Phishing has long been one of the most common types of cyber-attack, and it preys on people’s vulnerability. Traditionally, phishing has involved a fraudster recreating a legitimate-looking website or email domain of a trusted company and then emailing links to malware which unsuspecting people are tricked into downloading.
More recently phishing has evolved, and it now takes place via SMS (smishing), or by voicemail (vishing) when the Interactive Voice Response (IVR) system of a renowned company is copied and recreated.
Vishing can be particularly upsetting and disturbing, as victims often feel violated that they were tricked by such an authentic-sounding voice.
Phishing is still the main cause of data breaches, and the trend here is that fraudsters and hackers are getting bolder and using more complex techniques than ever before.
4. Social engineering

A few years ago, it was thought that the war against social engineering scams had been won. However, as banks and other organizations employ more robust anti-fraud technology, fraudsters are instead turning their attention to an easier target: the customers.
This means that social engineering, whereby criminals attempt to deceive people in order to manipulate them into divulging their personally identifiable information (PII), is once again on the rise.
The most worrying thing about this trend is that these scam artists do not have to be expert coders or sophisticated hackers in order to fool their victims. In effect, anyone could try this type of scam which is why we all need to be vigilant against it.
5. Authorized push payment scams

Authorized Push Payment (APP) scams are expected to rise in frequency in the coming months and years. In an APP scam, a victim inadvertently authorizes a payment into an account that they wrongly believe to be legitimate.
One of the factors behind the increase in this type of scam is the roll-out of so-called ‘faster payment’ processes by banks across the globe. Although these create a customer-centric, user-friendly service, they have a negative side: fraudsters can access and steal money in real time and ‘get away’ before any duplicitous activity is detected, let alone prevented.
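One widely discussed countermeasure is checking, before a faster payment is released, that the name the sender typed matches the name registered on the destination account (the idea behind the UK's Confirmation of Payee scheme). The sketch below is a heavily simplified, hypothetical illustration of that idea; a plain dict stands in for a real interbank lookup:

```python
# Simplified, hypothetical sketch in the spirit of Confirmation of Payee:
# compare the payee name the sender typed against the name registered on
# the destination account before releasing a real-time payment.
def check_payee(registry: dict, sort_code: str, account: str, typed_name: str) -> str:
    registered = registry.get((sort_code, account))
    if registered is None:
        return "no_match"        # account unknown: warn the sender
    if registered.lower() == typed_name.lower():
        return "match"           # safe to release the payment
    return "name_mismatch"       # classic APP-scam warning sign

registry = {("12-34-56", "00012345"): "Acme Supplies Ltd"}
print(check_payee(registry, "12-34-56", "00012345", "acme supplies ltd"))  # match
print(check_payee(registry, "12-34-56", "00012345", "Acme Suplies"))       # name_mismatch
```

A mismatch does not block the payment outright, but surfacing the warning before the victim confirms gives them a chance to pause, which is exactly what real-time payment rails otherwise take away.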
6. The rapid shift online

Businesses and services have been driven online in recent months, as widespread social distancing and COVID-19 government regulations have caused people to stay at home.
With such a large number of transactions having been shifted online so quickly, companies have been disrupted at a scale not seen before.
This means that the opportunities for fraudsters to take advantage of new, and potentially poorly designed, digital platforms have multiplied. The telecommunications, retail and financial services industries have been especially affected.
According to TransUnion, the percentage of suspected fraudulent digital transactions was 5% higher between March 11 and April 28 than between January 1 and March 10, 2020. In addition, more than 100 million risky transactions were identified between March 11 and April 28 alone.
This situation is still unfolding but it is clear that the businesses that will survive this period of turbulence are the ones that will leverage the most advanced and robust fraud prevention tools.
7. Deepfakes

Deepfakes have garnered widespread attention recently, and they are becoming increasingly prevalent.
Deepfakes are synthetic media in which a person in an image or video is replaced with the likeness of someone else. Faking content in this way is not new, but the technology used to create it has become very sophisticated. Machine learning and artificial intelligence techniques now produce high-quality audio and visual content that is very successful in deceiving people.
Scammers are using deepfakes to bypass and overcome facial recognition or voice biometric protocols put in place by financial institutions and other organizations.
A notorious example took place last year: a UK-based energy firm was attacked when an executive, duped into believing he was following instructions from his CEO, transferred $243,000 to a fraudulent account.
Deepfakes have caused growing worries about how new technology is being increasingly used to undermine trust, empower fraudsters and make traditional communication streams much more vulnerable to attack.