How will generative AI models, such as ChatGPT, impact compliance risk solutions in the fight against the rising tide of financial crime?
The financial sector is navigating choppy waters: bank collapses, such as SVB, have rattled the industry, and financial crime continues to wreak havoc. To put the latter into perspective, the United Nations Office on Drugs and Crime (UNODC) estimates that 3.6% of global GDP, approximately $1.6 trillion, is laundered.
As a result, regulations and laws covering financial crime are becoming increasingly stringent in tackling money laundering and fraud. Adding complexity to this heady mix of sophisticated cybercrime tactics is generative AI (Artificial Intelligence): models that can generate new audio, video, text, and even source code. Generative AI-powered apps, such as ChatGPT and Google’s Bard, have hyped up the AI landscape; but are these generative AI models a fraud threat, or could they be an important part of compliance risk solutions?
The risks of AI-enabled chatbots on compliance
The landscape of AI-enabled chatbots, like ChatGPT and Bard, is evolving fast. Generative AI and large language models (LLMs) are being developed worldwide, with new versions and expanded capabilities appearing regularly. Moreover, it is not just text that these AI-enabled systems can process. Associated technologies, such as deepfakes, are already being productized for wide-scale use; Tencent Cloud, the Chinese mega-company, has recently released a Deepfakes-as-a-Service (DFaaS) offering for $145 per video. The implications of this DFaaS technology, from KYC to extortion, are concerning. Financial crime risks that can come from AI-enabled chatbots include the following:
Generative AI models can trawl through user accounts, perform scams and phishing, and access sensitive data more quickly. AI models can be used to generate fake identity documents, fake images, and even fake videos that all look real. These data and documents can then be used to create synthetic identities that can pass KYC/CDD checks. As fraudsters harvest more personal data and generate ever-more believable ID documents, the AI models become more accurate and the scams more successful. The ease of developing believable identities means that fraudsters can run scalable identity-related scams with high success rates. These scams then feed into our next area:
Generative AI models can be applied across the money laundering chain to make laundering harder to detect and more likely to succeed. For example, fake companies could be created and used to aid in fund blending. AI could make generating fake invoices and fake transaction records simpler for fraudsters, and make them more believable. AI-generated identities that pass KYC/CDD checks could be used to open offshore accounts that hide the beneficial owners behind money laundering schemes. Generating false financial statements will be simple for generative AI models. Loopholes in legislation could be identified by LLMs and exploited to move money across jurisdictions. The list goes on.
What is certain is that if it can be imagined, generative AI can be used.
All the above opportunities that generative AI models such as ChatGPT create for fraudsters have consequences for regulations and compliance. Financial institutions must turn to AI models to fight fire with fire.
The opportunities of AI in beating financial crime and meeting compliance
AI is a double-edged sword: fraudsters may be exploring how to use generative AI models for bad, but the future of fraud detection and prevention lies in the good that AI can do.
AI is now recognized as an essential tool in the fight against financial crime. For example, the US Financial Crimes Enforcement Network (FinCEN) called out the need for AI and machine learning back in 2018, stating that AI and ML could help:
“better manage money laundering and terrorist financing risks while reducing the cost of compliance.”
The Financial Action Task Force (FATF) in its report “Opportunities and challenges of new technologies for AML/CFT” says this about AI:
“The increased use of digital solutions for AML/CFT based on artificial intelligence (AI) and its different subsets (machine learning, natural language processing) can potentially help to better identify risks and respond to, communicate, and monitor suspicious activity.”
The FATF report goes on to state:
“Transaction monitoring using AI and machine learning tools may allow regulated entities to carry out traditional functions with greater speed, accuracy and efficiency.”
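To make the FATF point concrete, a minimal sketch of what automated transaction monitoring looks like in principle: flagging transactions that deviate sharply from an account's historical baseline. This uses a simple statistical score as a stand-in for the machine-learned models a production system would apply; the function name, threshold, and figures are illustrative assumptions, not any particular vendor's implementation.

```python
from statistics import mean, stdev

def flag_anomalies(history, new_txns, threshold=3.0):
    """Flag transactions whose amount deviates sharply from an
    account's historical baseline. The z-score here is a simple
    stand-in for the ML risk scoring a real monitoring system uses."""
    mu = mean(history)
    sigma = stdev(history)
    flagged = []
    for amount in new_txns:
        z = abs(amount - mu) / sigma if sigma else float("inf")
        if z > threshold:
            flagged.append(amount)
    return flagged

# Typical activity on this illustrative account clusters around $120;
# a sudden $9,500 transfer stands out and is routed for analyst review.
history = [100, 130, 95, 110, 140, 125, 105, 150]
print(flag_anomalies(history, [115, 9500, 98]))  # → [9500]
```

The speed and consistency gains FATF describes come from applying this kind of scoring to every transaction automatically, leaving analysts to review only the outliers.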
When FATF survey respondents were asked about the main use of new technologies in AML/CFT, the following results were obtained:
A final word on the role of AI in financial compliance
The regulations that tackle financial crime and fraud have entered uncharted territory with the rise of generative AI models such as ChatGPT. Regulators have set ever-more stringent requirements to close the doors that this new threat has opened to increasingly sophisticated financial crime. However, legacy financial crime compliance risk solutions cannot keep up with the exacting measures required to beat it. Fortunately, the double-edged sword of AI also offers an opportunity to detect and prevent insidious AI-enabled threats. AI-enabled AML and fraud prevention solutions provide the mechanism for financial institutions to meet regulatory requirements and maintain compliance. However, AI must be explainable to ensure transparency; knowing the reasoning behind decisions enables the audits and reporting needed for regulatory compliance. With a sophisticated approach to the role of AI in meeting financial regulations, compliance can be future-proofed.
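One way explainability can be achieved is by using models whose scores decompose into per-feature contributions, so every alert carries its own reasoning. A minimal sketch, using hypothetical feature names and weights (in practice the weights would be learned from data):

```python
# Hypothetical weights for an interpretable linear risk model; a
# linear form keeps every decision decomposable for auditors.
WEIGHTS = {
    "amount_vs_baseline": 0.5,
    "new_beneficiary": 0.3,
    "high_risk_jurisdiction": 0.2,
}

def score_with_explanation(features):
    """Return a risk score plus each feature's contribution to it,
    giving the audit trail regulators expect behind every alert."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"amount_vs_baseline": 0.9, "new_beneficiary": 1.0, "high_risk_jurisdiction": 0.0}
)
print(round(score, 2))           # → 0.75
print(max(why, key=why.get))     # → amount_vs_baseline
```

An analyst reviewing this alert can report not just that the transaction scored 0.75, but that the unusual amount was the dominant factor, which is precisely the transparency that audits and regulatory reporting require.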
Eastnets’ growing capabilities in AI for combating financial crime
At Eastnets, our industry award-winning products, SafeWatch Screening, SafeWatch AML, and PaymentGuard, embrace the latest AI modelling techniques to improve the quality of detections, reduce false positives, and provide new contextual insights into behavior.