Veriff

Three themes to look out for in AI in 2024

Artificial intelligence has been one of the most discussed topics in our Veriff Voices podcasts in 2023.

Chris Hooper, Director of Content at Veriff.com
December 20, 2023
Podcast

Our conversations with internal and external experts highlighted three key issues that will be important as AI heads further into the mainstream in 2024.

1. Dealing with bias

Unintended bias can be a significant problem in automated processes that incorporate AI and machine learning. Fortunately, it’s one that can be effectively addressed with the help of focused human intervention. 

The first step in reducing bias is understanding where it comes from. Bias exists in the real world, so it inevitably ends up in data, as Liisi German, Senior Product Manager at Veriff, points out.

‘Data reflects how we see the world around us,’ comments Liisi. ‘If all of us think about a wedding dress, then probably everybody in Western cultures thinks about a white dress. But Asian people might not think about a white dress.’ 

AI is programmed to learn to discriminate between data supplied by humans, using rules created by humans. As a result, bias inevitably creeps in. Due to the nature of the process, bias can be progressively amplified by machine learning in a kind of feedback loop if not addressed. The results can be unpredictable, and often undesirable, as evidenced by well-publicised examples even with the latest and most advanced generative AI models.

Veriff is constantly exploring ways to address bias in identity verification, both by improving AI algorithms and through human intervention.

‘For example, we have people from different countries and ethnicities working in the team, annotating the data,’ says Liisi. ‘But also, we’re measuring how biased we are, how our performance differs on different genders and races.’
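Measuring how performance differs across groups can start with something as simple as comparing per-group error rates. The sketch below is purely illustrative (the group labels, outcomes, and the false-rejection metric are invented for this example; Veriff's actual bias metrics are not described in the podcast):

```python
from collections import defaultdict

def false_rejection_rates(decisions):
    """Compute the false-rejection rate per demographic group.

    `decisions` is a list of (group, was_genuine, was_approved) tuples;
    a false rejection is a genuine user the model declined.
    """
    genuine = defaultdict(int)
    rejected = defaultdict(int)
    for group, was_genuine, was_approved in decisions:
        if was_genuine:
            genuine[group] += 1
            if not was_approved:
                rejected[group] += 1
    return {g: rejected[g] / genuine[g] for g in genuine}

# Hypothetical verification outcomes: (group, genuine user?, approved?)
outcomes = [
    ("A", True, True), ("A", True, True), ("A", True, False), ("A", True, True),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", True, True),
]
rates = false_rejection_rates(outcomes)

# A large gap between groups is a signal of bias worth investigating.
gap = max(rates.values()) - min(rates.values())
```

The same disaggregation can be applied to any metric (false acceptance, processing time, and so on); the point is that bias only becomes actionable once it is measured per group.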

‘You can definitely remove bias,’ says Suvrat Joshi, Veriff’s Senior Vice President of Product. ‘I think that achieving a perfect model output all the time or over a period of time is hard, but it's never impossible – and it's a great thing to aim for.’

When biases are identified, they can be addressed through a process known as reinforcement learning from human feedback, or RLHF for short (see our recent article on hot topics in online fraud and identity verification for more on how this works).

Suvrat sees RLHF as a great way of helping address concerns about the use of AI for both businesses wanting to employ the most advanced identity verification techniques and their end customers.

‘Augmentation is always needed,’ says Suvrat. ‘And it's continuous learning, which allows the model to either stay on point or be improved over time. I think that's an essential part of building confidence – tuning and improving so that we provide our customers with that reassurance that it's not just something running on autopilot.’
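The human-in-the-loop routing that underpins this kind of continuous learning can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration of only the feedback-collection half of the loop (the function names, confidence threshold, and toy model are invented; it does not show the reinforcement-learning optimisation itself, and is not Veriff's actual pipeline):

```python
def route_with_human_feedback(cases, model, reviewer, threshold=0.9):
    """Send low-confidence model decisions to a human reviewer and
    collect the corrected labels as new training examples.
    """
    training_examples = []
    decisions = {}
    for case_id, features in cases:
        label, confidence = model(features)
        if confidence < threshold:
            label = reviewer(features)                    # human overrides the model
            training_examples.append((features, label))   # fed back for retraining
        decisions[case_id] = label
    return decisions, training_examples

# Toy stand-ins: the model is confident on clear cases, unsure otherwise.
model = lambda f: ("approve", 0.95) if f > 0.5 else ("approve", 0.6)
reviewer = lambda f: "decline"

decisions, examples = route_with_human_feedback([("c1", 0.9), ("c2", 0.2)], model, reviewer)
```

Each pass through the loop both resolves the uncertain case and produces a labelled example, which is what lets the model "stay on point or be improved over time".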


2. The AI arms race between criminals and gatekeepers

Artificial intelligence undoubtedly promises to deliver significant benefits in the financial services sector’s drive to address fraud. Unfortunately, it’s also an innovation that’s being rapidly and enthusiastically embraced by criminals, as Kathryn Sharpe, Head of Financial Crime Product at banking-as-a-service platform Griffin, explains.

‘I think generative AI is going to be really powerful (for bad actors), and it's already been really powerful – for creating deep fakes, and for generating texts to create a bunch of applications really quickly.’

Gatekeepers are inevitably finding themselves in something of an arms race with fraudsters, who have the advantage of being much freer to “move fast and break things”.

‘Whilst we're going to have an ability to use AI in a much more meaningful way in terms of spotting patterns and being both proactive and reactive, we're going to have to put in changes much more slowly, probably, than criminals are able to,’ says Kathryn. 

Kathryn says there needs to be a trade-off between ensuring financial services businesses and other gatekeepers are using AI safely and allowing enough freedom for them to be able to keep pace with criminals.

At the same time, for businesses there’s always a trade-off to be made between minimising risk and avoiding excessive friction.

‘As your business changes, as the fraud landscape changes, as the customer landscape changes, you'll want to be able to swing that pendulum back and forth,’ says David Divitt, Veriff’s Senior Director of Fraud Prevention and Experience. 

Fortunately, effective anti-fraud processes can be virtually invisible to the end user.

‘You can do a lot by looking at the ways customers interact with your services,’ comments David. ‘If you can gather all that data and make better decisions with it, you can do that without the customer really ever knowing that it's happening.’
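Combining those interaction signals into a single decision, with a threshold that can "swing back and forth" as the risk appetite changes, might look like the following hypothetical sketch (the signal names, weights, and threshold are invented for illustration):

```python
def risk_score(signals, weights):
    """Combine behavioural signals (each scored 0-1) into one
    weighted fraud-risk score between 0 and 1."""
    total_weight = sum(weights.values())
    return sum(weights[k] * signals.get(k, 0.0) for k in weights) / total_weight

# Hypothetical signals gathered invisibly from how a user interacts
session = {"typing_cadence_anomaly": 0.1, "device_mismatch": 0.0, "ip_reputation": 0.2}
weights = {"typing_cadence_anomaly": 2.0, "device_mismatch": 3.0, "ip_reputation": 1.0}

score = risk_score(session, weights)

# The threshold is the "pendulum": raise it to reduce customer friction,
# lower it to reduce risk, as the fraud landscape shifts.
THRESHOLD = 0.5
approved = score < THRESHOLD
```

Because the scoring happens on data the user is already generating, tightening or loosening the threshold changes the risk posture without adding any visible friction.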

"Whilst we're going to have an ability to use AI in a much more meaningful way in terms of spotting patterns and being both proactive and reactive, we're going to have to put in changes much more slowly, probably, than criminals are able to [...]"

Kathryn Sharpe, Head of Financial Crime Product, Griffin

3. AI’s impact on software engineering

The potential applications for generative artificial intelligence extend into virtually every discipline, and software engineering is no exception. At Veriff, we’re already using AI-assisted solutions to generate some of our code. Generative AI has access to the reams of technical documentation available on the internet, and can use this to help engineers integrate with even the most arcane APIs.

However, Hubert Behaghel, Veriff’s Senior Vice President of Engineering, sees AI as a tool for engineers, rather than as a replacement for them.

‘It’s shifting the work, but it hasn’t replaced the entire role,’ says Hubert. ‘I don’t envisage, at least for the next few years, that we’ll no longer need an iOS engineer, or a data scientist…’

Instead, Hubert thinks that AI will increasingly fulfil the function of specialist, thanks to its ability to rapidly access and synthesise knowledge about the complexities of specific elements of the software stack. This will accelerate the return of software engineers to a more holistic role, a trend already seen with the rising popularity of ‘T-shaped’ engineers (individuals possessing both specialist skills and the ability to link into other disciplines).

‘When I was younger, we didn’t have front end and back end, we just did everything,’ says Hubert. ‘I wonder if the level of complexity of the stack we have now will be something one person can handle again, because of the assistance we get from AI.’ 

Veriff Voices

Explore the full library of Veriff Voices podcast episodes here.
