Is open-source AI a good or bad thing for the finance sector?


If you work in the tech, financial, or FinTech sector, you will undoubtedly have seen many social posts and articles on the AI phenomenon ChatGPT. ChatGPT is just one high-profile example of a model trained using reinforcement learning from human feedback (RLHF). The same technique now underpins open-source implementations, opening up open-source AI to all.

ChatGPT is an AI-enabled chatbot released by OpenAI in November 2022. Since then, this interface to OpenAI’s Large Language Model (LLM) has exploded in popularity, with new ways of using it regularly popping up on social timelines. Notably, OpenAI has received multiple billions of dollars of investment from Microsoft; the significance of this will become evident as you read on.

ChatGPT, and similar interfaces built on the same underlying RLHF technique, offer both exciting and worrying futures for the world of finance. Here, Eastnets looks at ChatGPT and open-source AI technologies and how they can be applied for good (and bad) in Anti-Financial Crime.

Open-source AI for Good

Open-source AI supplied by organizations such as OpenAI can be a game-changer in the financial sector. AI already sees significant success in detecting complex money laundering and fraud; the vast scale of modern financial crime requires the capabilities of AI and machine learning (ML) to provide the real-time flexibility that traditional rules-based fraud detection lacks. Machine learning can learn continuously as new data sets are fed into the ML algorithm. With the vast amounts of fraud being committed, these data sets change swiftly, and as they do, the ML model adapts and refines its fraud detection. Open-source AI is likely to offer even more use cases in the financial sector; Microsoft clearly recognizes the potential, investing large sums in OpenAI.
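
To make the idea of continuous learning concrete, here is a minimal sketch of an incrementally updated fraud classifier using scikit-learn’s partial_fit. The features, data, and labels are invented for illustration only; a production system would use far richer transaction data.

```python
# Minimal sketch of continuous (incremental) learning for fraud scoring.
# Features, data, and labels here are invented for illustration only.
import numpy as np
from sklearn.linear_model import SGDClassifier

# A linear model trained by stochastic gradient descent supports
# partial_fit, so it can keep learning as labelled transactions arrive.
model = SGDClassifier(loss="log_loss", random_state=42)
CLASSES = np.array([0, 1])  # 0 = legitimate, 1 = fraudulent

def update_model(features: np.ndarray, labels: np.ndarray) -> None:
    """Fold a fresh batch of investigator-labelled transactions into the model."""
    model.partial_fit(features, labels, classes=CLASSES)

def fraud_scores(features: np.ndarray) -> np.ndarray:
    """Estimated probability that each transaction is fraudulent."""
    return model.predict_proba(features)[:, 1]

# Illustrative features: amount, hour of day, transactions in the last 24h.
batch = np.array([[120.0, 14, 3], [9800.0, 3, 41]])
update_model(batch, np.array([0, 1]))
print(fraud_scores(batch))
```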

These use cases include the application of AI in Anti-Financial Crime. One example is an AI-enabled automated system that acts as a form of “Investigator Chatbot” based on conversational AI. An Investigator Chatbot could work alongside banks during fraud investigations, helping to put together compliance documentation such as SARs (Suspicious Activity Reports). People in the tech sector are already exploring the use of ChatGPT to develop privacy and security policies; regulatory compliance and investigation help is the next step. Importantly, open-source NLP (natural language processing) can provide an Investigator Chatbot service across many languages, offering a way to handle cross-border compliance.
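
As a rough illustration of the Investigator Chatbot idea, the sketch below asks an LLM to draft the narrative section of a SAR. It assumes the pre-1.0 openai Python package and a valid API key; the case details and prompt are invented, and any real filing would require review by a human investigator.

```python
# Minimal sketch of an "Investigator Chatbot" drafting a SAR narrative.
# Assumes the pre-1.0 openai Python package; the case fields and prompt
# are invented. A human investigator must review anything it produces.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

case = {
    "subject": "ACME Trading Ltd",  # hypothetical case details
    "activity": "12 structured cash deposits just under the reporting threshold",
    "period": "2023-01-01 to 2023-02-15",
}

prompt = (
    "Draft the narrative section of a Suspicious Activity Report. "
    f"Subject: {case['subject']}. Observed activity: {case['activity']}. "
    f"Period: {case['period']}. Use neutral, factual language."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are an AML compliance assistant."},
        {"role": "user", "content": prompt},
    ],
)
print(response.choices[0].message.content)
```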

Another area where open-source AI could deliver innovation in Anti-Financial Crime is the use of Open-Source Intelligence (OSINT) to detect financial scams in unstructured data. An AI interface could look for patterns and trends in data over time, mining publicly available data for insights that drive decisions and mitigate risks. For example, open-source AI could be trained on social media post data to look for patterns in the engagement of money mules over time, revealing how recruitment patterns and language change to evade detection.
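
A minimal sketch of this kind of OSINT trend mining might look like the following. The posts and recruitment phrases are invented for illustration; a real pipeline would ingest data from platform APIs and use NLP rather than exact phrase matching.

```python
# Minimal sketch of OSINT trend mining: count how often money-mule
# recruitment phrases appear in public posts, month by month. The posts
# and phrase list are invented; a real pipeline would pull data from
# platform APIs and use NLP rather than exact phrase matching.
from collections import Counter, defaultdict

RECRUITMENT_PHRASES = ["easy money", "use your bank account", "quick cash transfer"]

posts = [
    {"month": "2023-01", "text": "Earn easy money from home, no experience!"},
    {"month": "2023-02", "text": "We just need to use your bank account briefly."},
    {"month": "2023-02", "text": "Quick cash transfer gig, DM me."},
]

trend = defaultdict(Counter)  # month -> phrase counts
for post in posts:
    text = post["text"].lower()
    for phrase in RECRUITMENT_PHRASES:
        if phrase in text:
            trend[post["month"]][phrase] += 1

# A rising count, or a brand-new phrase, flags a shift in recruitment
# language worth an analyst's attention.
for month in sorted(trend):
    print(month, dict(trend[month]))
```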

ChatGPT and OpenAI are likely to take this concept to a higher level, given Microsoft’s significant investment in the technology; the potential to use it for good could be a positive disruptor for the industry. Unfortunately, however, fraudsters have also set their sights on open-source AI.

Open-source AI for Bad

The financial sector has to deal with mountains of fraud; PwC’s Global Economic Crime and Fraud Survey 2022 found that over half (51%) of organizations had suffered from financial fraud in the previous two years. PwC has coined the term “platform fraud” to explain how financial crime tactics are expanding and evolving, with cybercriminals innovating around platforms that handle financial and communication transactions. The PwC report concludes, “Platform fraud has become a criminal enterprise – reinforcing the imperative for organizational resilience and the right tools to reduce your risk.”

ChatGPT and similar AI-enabled interfaces, often open-source in nature, are entering the fraud space by offering novel ways to innovate. Innovation is as essential in the world of fraud as it is in business. Industry research from the December 2022 Nilson Report projects that card fraud losses will fall to 6.6 cents per $100 of total volume globally, down from 6.8 cents in 2020. This reduction suggests that the powerful AI-enabled tools deployed to fight fraud are working, which is excellent news. However, cybercriminals will not give up; in the war of attrition between fraudsters and organizations, they innovate to move their chess pieces to new positions. This is where the nefarious use of AI comes in: ChatGPT and, in particular, open-source AI will undoubtedly give cybercriminals a new tool to commit fraud. And this is already happening:

Open-source AI and financial fraud and scams

An advisory issued by Check Point Research (CPR) in January 2023 outlines how the cybercriminal community is using ChatGPT. The researchers explain how ChatGPT can enable several scenarios, including those used to commit financial fraud or to form part of a financial fraud attack chain. They point out that discussions have already begun on dark web forums, with cybercriminals brainstorming ways to use ChatGPT to commit fraud, such as using it to execute social engineering. The interface can perform various tasks and give fraudsters the tools to up their game; two examples include:

Phishing emails: a paper entitled "Creatively malicious prompt engineering" by Andrew Patel and Jason Sattler looked at the use of ChatGPT for generating phishing and spear-phishing emails. The researchers used GPT to create believable and persuasive spear-phishing emails. A serious concern is that the model did this so well that it opens the door for many more cybercriminals whose language skills would otherwise have precluded them from creating sophisticated spear-phishing campaigns.

Verifying customers and onboarding: eKYC processes are urgently needed in the financial sector to speed up onboarding and improve the customer experience. One way that open-source AI could enable KYC fraud is through the generation of fake identity documents. DALL-E, another OpenAI interface, generates images and could be used to create fake identity documents such as passports and driver's licenses, potentially adding a new layer to synthetic identity fraud. Fortunately, the anti-fraud industry tends to stay ahead of faked ID documents, using techniques such as liveness checks when documents are presented online. However, the level of identity document verification is crucial in preventing eKYC fraud, and automated, AI-driven open-source tools could mean that as fraud volumes increase, detection becomes more complicated.

Anti-Financial Crime and open-source AI wars

Open-source AI has entered the long-running war of attrition between fraudsters and Anti-Financial Crime fighters. Large Language Models, including Google's PaLM, offer cybercriminals the potential to increase the pressure on Anti-Financial Crime systems. The financial sector must fight fire with fire. Innovations are coming, and AI-enabled anti-fraud, an established method of fighting financial fraud, is adding to its armory. Eastnets is working to bring exciting new Large Language Model interfaces to the industry that will take on any nefarious use of open-source AI tools.
