Harnessing the transformative power of AI in financial services
From caution around its adoption to the importance of ethics, global fintech influencer Dr. Ruth Wandhöfer shares her thoughts on artificial intelligence within banking.
Dr. Ruth Wandhöfer’s passion is the digital financial ecosystems of the future. We asked her why cyber resilience should be “everyone’s business”, and what steps organisations can take to future-proof themselves in this space. Here’s what she told us:
AI should be used holistically within organisations
Cyber AI should be used as a survival toolkit – but many argue that what we currently have in place just doesn’t cut it.
We are very much in a digital world, so everything must be digitally safe. When people become aware of a problem, they need to report it.
Far too often, people aren’t made aware of an incident until it’s far too late. This includes ICT incidents, data privacy breaches and over-reliance on very large providers. This is in fact what happened with the Microsoft patching issue. It wasn’t a cyber-attack per se, but it demonstrated third-party operational dependency.
Cyber resilience is key – as is understanding your dependencies, managing your suppliers and having a backup plan in place in case something disastrous happens.
Caution is important when it comes to AI adoption
Historically, we haven’t really had AI experts within financial institutions. We’ve had chief information security officers for a while – but only now are we seeing roles like AI officers and AI strategists popping up.
Understanding the core of this technology, and how different segments are maturing, is critical to understanding what’s out there in the market.
We’ve seen a lot of ‘hypes’ over the past 10 years – from fintech to blockchain and then regtech. Today, everything is labelled ‘artificial intelligence’. But a lot of what is called AI shouldn’t be, because it’s actually much closer to simple automation.
Companies need to understand the maturity of their technology, as well as their own problem statements. What are they planning to use AI for? What are their competitors doing? They shouldn’t be fooled by marketing and shiny tools.
Proactive vs reactive – what we can learn from regtech
After the fintech wave, regtech came to the fore. This is because we saw so much regulatory change post-financial crisis, and it was impossible for everything to be done manually.
Money was thrown at the problem. This was especially the case with AML fines and other big-ticket penalties handed out to banks by regulators around the globe. But throwing money at it wasn’t going to solve the issue.
Regtech emerged because technology innovation became more readily available, and because people in the banking industry started thinking about how to fix problems more proactively.
And today, this proactive approach – as opposed to a reactive one – is slowly but steadily starting to gain traction within financial services.
Companies need to be able to see the types of threats they might face. This will involve looking at your third-party ecosystem, including who you are connected with. For some organisations, this could be 1,000-plus vendors.
Guidelines in this space used to be really hard to adhere to, because so many organisations were paying third parties without really knowing why.
The key here is understanding who your third parties are and having oversight of how close they are to you, as well as how critical they are.
If, for example, all of your programmes and internal software run on Microsoft, then Microsoft is a third party for you and you need to know what purpose they serve. Also crucial is a focus on security patches, and the risks they pose.
The ethical use of AI
When using AI, it’s important to keep monitoring how solutions work for you and what any potential consequences might be.
What’s also imperative is the ability for humans to override AI decisions, and to see things from an ethical perspective. AI doesn’t see everything, and it can make inferences based on what it thinks is right. And, as evidence from large language models shows, it can get things totally wrong.
This is where your own ethical standards, policies and culture will come in. What’s key isn’t revenue generation, cost-cutting or profit for profit’s sake. It’s about doing the right thing for your customer.
If you’re using AI, scrutinising all the choices you’re making and having fundamental ethical principles anchored into your organisation becomes more important than ever.