
Prakash Santhana Wed 19 Mar 2025
Collected at: https://eandt.theiet.org/2025/03/18/industry-insight-how-generative-ai-used-financial-fraud-detection
The fight against fraud is evolving into a digital arms race, where both criminals and financial institutions (FIs) are leveraging artificial intelligence in attempts to outmanoeuvre one another.
Fraudsters are using generative AI (GenAI) to create scams that are increasingly challenging to detect and disprove, while anti-fraud teams are deploying AI-powered solutions of their own to sniff out and prevent these threats.
In this escalating battle, FIs must adopt the tools of the enemy to stay ahead.
The rise of GenAI has transformed fraud from a game of pure deception into one of adaptation and counter-adaptation. Fraudsters are no longer limited to traditional scams – they can now generate highly realistic fake documents, clone voices, and even create convincing deepfake videos to exploit FIs and their customers.
This escalation could have presented a near insurmountable challenge, but (regrettably for the scammers), the disruptive potential of AI cuts both ways. Today, information security teams are harnessing AI to identify these novel threats in real time, deploying advanced fraud detection frameworks that continuously learn and evolve to keep up with this war of attrition.
But what role can AI truly play in fraud prevention? How are fraudsters leveraging GenAI to manipulate digital identities and deceive financial systems? More importantly, what strategies can FIs implement to counteract these emerging threats?
The growing threat of AI-driven fraud
Generative AI has given fraudsters powerful new tools to manipulate digital identities and bypass traditional security measures. Consumers are the most popular targets – spotting scams has become so difficult that falling for them is now a matter of when, not if, even for the most discerning of customers. In the UK alone, 9 million people are estimated to have fallen victim to financial fraud in the past year.
Even phishing emails, once riddled with obvious errors, are now nearly indistinguishable from legitimate corporate communications, making it easier for scammers to deceive customers.
Even for banks’ security teams, this new wave of technology means facing increasingly sophisticated fraud attempts. Identifiers such as voice, face and official documentation were treated as definitive just a few years ago; today, they are about as reliable as a chocolate teapot.
Scammers can now forge official documents with remarkable accuracy, deploy deepfake video to impersonate individuals on video calls, and even clone users’ voices convincingly enough to trick FIs into approving fraudulent transactions.
But the enemy is not only at the gate – FIs must also contend with insider fraud.
Employees with privileged access may manipulate security policies to create loopholes or escalate their own permissions in ways that go undetected. Bad actors within the business can also leverage AI to identify weaknesses between security zones, allowing them to exploit gaps in policy enforcement.
As fraud becomes more complex, countering it becomes far harder for FIs. The old methods don’t hold up against the new threats, so institutions must find a way to evolve. The best path forward is to fight fire with fire: not just detecting suspicious behaviour, but predicting and preventing it before damage is done.
How financial institutions can leverage GenAI for fraud prevention
To combat these evolving threats, FIs are integrating GenAI into their fraud detection strategies. While GenAI has made fraud attempts more convincing and harder to detect, the same technology can be used to counter them.
One key area is enhancing authentication measures. AI-driven biometric verification and liveness checks ensure that identity documents and video submissions come from real individuals, not deepfakes. At the same time, AI-powered anomaly detection can flag unusual patterns in customer behaviour, helping to identify account takeovers or unauthorised access attempts.
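To illustrate the anomaly detection side in concrete terms, the short Python sketch below scores a new customer session against historical behaviour using an unsupervised model (scikit-learn’s IsolationForest). It is a minimal illustration rather than any bank’s production system, and the session features, figures and threshold are hypothetical.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: amount, hour of day, new-device flag,
# distance from the customer's usual location (km), payee age (days).
historical_sessions = np.array([
    [42.0, 14, 0, 3.0, 900],
    [18.5, 9, 0, 1.2, 400],
    [55.0, 19, 0, 5.5, 1200],
    [22.0, 12, 0, 0.8, 760],
    [31.0, 17, 0, 2.1, 980],
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(historical_sessions)

# A new session: large transfer at 3am, new device, far from home, brand-new payee.
new_session = np.array([[4800.0, 3, 1, 310.0, 0]])
if model.predict(new_session)[0] == -1:
    print("Session flagged for step-up authentication and manual review")

In practice such a score would feed into a wider decision engine – triggering liveness checks or step-up authentication rather than blocking a customer outright – but the principle of learning a customer’s normal behaviour and flagging deviations is the same.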
For insider fraud, advanced AI solutions like Davies’ SPEAR (Secure Proactive Enhanced Access Review) provide a more resilient defence. SPEAR uses graph analytics to map out relationships between users, access points, and resources, identifying high-risk connections and policy violations.
By continuously analysing behavioural patterns and correlating them with stated security policies using GenAI, the system can highlight discrepancies and prevent privilege misuse before it escalates into a security breach. Security teams can even simulate potential attack scenarios, stress-testing their systems against AI-driven threats to identify and reinforce weak points.
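As a rough sketch of what graph-based access review looks like in general – not Davies’ actual SPEAR implementation – the Python snippet below uses networkx to map hypothetical user-to-role-to-resource entitlements and flag any route to a sensitive resource that the stated policy does not explain. Every user, role and resource name in it is invented for illustration.

import networkx as nx

# Entitlements observed in the environment: user -> role -> resource.
G = nx.DiGraph()
G.add_edges_from([
    ("alice", "payments_ops"), ("payments_ops", "payment_switch"),
    ("bob", "helpdesk"), ("helpdesk", "crm"),
    ("bob", "payments_ops"),  # extra role grant not covered by policy
])

# Stated security policy: the roles each user is supposed to hold.
policy = {"alice": {"payments_ops"}, "bob": {"helpdesk"}}
sensitive_resources = {"payment_switch"}

# Flag any path from a user to a sensitive resource that passes through
# roles the policy does not grant to that user.
for user, allowed_roles in policy.items():
    for resource in sensitive_resources:
        if nx.has_path(G, user, resource):
            roles_on_path = set(nx.shortest_path(G, user, resource)[1:-1])
            if not roles_on_path <= allowed_roles:
                print(f"Policy discrepancy: {user} can reach {resource} via {roles_on_path}")

A production system would work at far larger scale and layer behavioural signals on top, but the core idea – comparing the access graph that actually exists with the one the written policy implies – is what makes discrepancies like the extra grant above stand out.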
The future of fraud prevention
FIs that fail to evolve their fraud detection capabilities risk falling behind in an era of increasingly sophisticated threats. By embracing generative AI solutions, banks and financial organisations can proactively mitigate fraud risks, safeguard customer trust, and strengthen compliance frameworks.
As fraudsters continue to innovate, FIs must do the same – leveraging AI-driven solutions like SPEAR to stay ahead of threats and ensure a more secure financial ecosystem.
The battle for consumer security may be evolving, but by turning the enemy’s weapons against them, security teams can give themselves the upper hand in keeping their customers safe.
Prakash Santhana is a partner at Davies. He has extensive consulting and operational experience in digital consumer authentication, payments fraud and blockchain.
