Podcast
The fraud landscape in 2026: polygamous working, AI, and a spike in online fraud
Rachael Tiffen’s remit as Director of Public Sector and Learning at Cifas includes overseeing the Cifas Fraud and Cyber Academy. She spoke to us about evolving fraud threats, how criminals are using AI and what the rise of polygamous working means.
Rachael Tiffen is Director of Public Sector and Learning at Cifas, the UK’s largest not-for-profit fraud prevention service, which shares data and intelligence to learn how to fight fraud. She joined us to discuss the critical steps required to address the evolving fraud landscape in 2026, the ways AI has amplified fraud threats, and the key areas employers must focus on to stay ahead.
Rapid responses are needed
People are finding new ways to commit fraud – but fraud is also being taken much more seriously, and responses are evolving rapidly, too.
What would also be hugely beneficial is if the government were to set clear priorities and objectives for the industry. The UK’s first ever fraud minister has been appointed, which is a massive step – and a sensible next one would be cross-sector data and intelligence sharing.
The Economic Crime and Corporate Transparency Act came into force in 2023. This was followed by the Failure to Prevent Fraud offence, which came into effect in September 2025. Finally, but crucially, the Public Authorities (Fraud, Error and Recovery) Act is currently going through Parliament – and this will empower local authorities to take action against fraud.
Mandatory identity verification
The rapid evolution of fraud tactics, insider threats and cross-sector targeting means that today, everyone – individuals and organisations alike – has to be both agile and proactive. Polygamous working – whereby an employee secretly holds down multiple full-time jobs simultaneously – means that everybody needs to be ready to act, data needs to be robust, and intelligence sharing across sectors needs to be drastically improved.
As of November 2025, Companies House has enforced mandatory identity verification for directors – new and existing – and for persons with significant control. The aim here is to strengthen the trust in the UK’s corporate ecosystem and reduce fraud by ensuring that the individuals behind companies are exactly who they say they are.
AI has made fraud faster, slicker, and much harder to detect. It has supercharged the fraud threat – bringing with it a 20% year-on-year increase in synthetic voice fraud and a 26% increase in total fraud attempts, according to the 2026 Veriff Fraud Report – and this demands urgent action.
Explore key fraud statistics, regulatory shifts, and actionable recommendations from Veriff’s new report.
Scalable and believable
We have to move quickly because criminals are using AI at pace and with ease. They’re also generating high-quality fake documentation, impersonating brands, authorities and individuals in highly convincing ways.
What’s more, we’ve seen fraudsters use AI to automate social engineering tactics – making scams more scalable and believable. And the tools that allow them to do this can bypass traditional verification systems and exploit all sorts of vulnerabilities in digital platforms.
Fraud is essentially becoming a service industry – with criminals selling AI-powered toolkits on the dark web and making it easy for anyone to launch a convincing scam. But as much as AI is part of the problem, it’s also part of the solution.
Personal protection
People also need to think about looking after themselves. Fraudsters are using deepfake technology to take advantage of individuals by posing as real people. Some victims have been led to believe that they’re in long-term relationships, when their ‘partner’ doesn’t even exist. And of course, the end game here is financial gain. Vast amounts of money, sometimes hundreds of thousands of pounds, are being stolen from victims – leaving them emotionally and psychologically damaged.
Today, people are going as far as selling their own identities, as well as company logins, for financial gain. This can be done willingly, or through coercion. But by doing so, people are exposing themselves to identity loss and many other liabilities.
This is sometimes linked to money muling – which can be brought on by economic hardship caused by the cost-of-living crisis, simply by people being unhappy at work or even just by a lack of awareness.
Money muling can result in long-term consequences, including criminal records, being blacklisted from financial services or loss of employment opportunities.
The rise and rise of polygamous working
Working multiple jobs secretly and using fraudulent reference houses is a real emerging threat – both in the private and public sectors.
People may be working a couple of different contracts at the same time, and there have also been instances of people working up to 14 at once. Doing so means concealing conflicts of interest or income sources, and perpetrators may also be providing false employment histories and CVs.
In one case, an individual had four jobs – one in the Home Office, one at Defra, one at the Department of Health, and one at an unnamed civil service organisation – all at the same time. He’s been charged with nine counts of fraud over a three-year period and is awaiting sentencing.
Greater checks are needed
This shows how important it is to verify people before you employ them. HR often do their own checks, of course, but there’s more detailed due diligence that can be carried out – checking, for example, whether people have the qualifications they claim, and whether everything on their CV is accurate.
Imagine, for example, that a social worker tells you that they have a certain qualification, but they don’t. They could be sent into people’s homes. So you’ve not just employed the wrong person, but you’re letting down people who are in genuine need of support. A mistake like this could have huge repercussions.