
Will KYC become obsolete in the AI era?


Generative artificial intelligence (AI) is reshaping our world, moving beyond its initial harmless applications to power more sinister activities. Its ability to create convincing fake content lets criminals exploit the personal information of the vast numbers of internet-connected people for illicit purposes. In response, businesses are bolstering their defences, often relying on Know Your Customer (KYC) checks as an essential step in fraud prevention. While advances like biometrics and facial recognition have undoubtedly improved identity verification, KYC isn’t foolproof, and criminals are adept at finding its weaknesses and slipping past checks.

Now, generative AI is raising the stakes, making it increasingly difficult to tell real customers from fake ones. It introduces a new class of threat that can outsmart existing security measures [1]. This leap in AI capability and availability is a turning point for KYC practices, which must evolve to counter the new wave of tech-enabled deceit.

The dual threats intensifying

One of the major concerns for the future of KYC is the alarming rise of deepfakes. These deceptive tools are now a global problem, with an estimated 500,000 video and voice deepfakes shared on social media sites worldwide in 2023 [2]. While the technique may seem cutting-edge, it is not new.

In 2019, a British CEO transferred $243,000 to a fraudster [3]. The scammer used a “deepfake” that simulated the voice of the head of the company’s parent organisation, tricking the CEO into believing he was speaking to his boss.

Deepfakes are cheap to make, too. A tool from Tencent creates a three-minute deepfake video for around $145 [4]. With such tools readily available, KYC fraud is increasingly being used to facilitate organised crime such as money laundering and terrorist financing. If fake ID documents and the associated verification elements can be created this cheaply and simply, the method will quickly become a favourite tactic of cybercriminals.

Synthetic IDs are another concern. They are created by combining real identifiers, such as stolen Social Security numbers, with fabricated names. Even conventional, non-AI-generated synthetic identities are hard to detect: estimates reported by Reuters suggest a staggering 95% of synthetic identities used for KYC go undetected during onboarding [5].

AI-enabled fraud leaves KYC at serious risk. Once the process is broken, money laundering and terrorist financing can soar. Smaller-scale cybercrime could rise too, because hackers often sell their successful methods as ready-to-use services.

If this happens with deepfakes for KYC, then banks, FinTechs, and eCommerce businesses will find it increasingly difficult to verify with confidence who their customers really are.

The weak points of KYC

KYC has always been a risky business. As the first step in establishing trust between a financial institution and an individual, it acts as a gateway for all subsequent interactions. If fraudsters can compromise this entry point, they can move through the system unnoticed, making it a natural target for their efforts.

As the KYC process has evolved, so have the methods of deception. The market for facial recognition deployments is expected to reach $19.3 billion by 2032 [6], reflecting how facial recognition is becoming a standard part of many onboarding processes. Challenger bank apps, for example, may require facial recognition during account setup.

Deepfakes can trick the KYC processes used by FinTech vendors such as payment apps. Even processes that require liveness checks can be spoofed by deepfake videos.
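To make that weakness concrete, here is a simplified Python sketch of a randomized challenge-response liveness check; the challenge list, function names, and one-time code are illustrative assumptions, not any vendor’s actual implementation. A pre-rendered deepfake video cannot anticipate a random prompt, but real-time face-swapping tools may still pass, which is exactly why such checks cannot stand alone.

import random
import secrets

# Illustrative challenge-response liveness check. A pre-recorded deepfake
# cannot anticipate a randomly chosen prompt, although real-time face-swap
# tools may still respond correctly, so this is one signal among many,
# not a complete defence.

CHALLENGES = ["turn head left", "turn head right", "blink twice",
              "smile", "read the code aloud"]

def issue_liveness_challenge():
    """Pick an unpredictable action and a one-time code to read aloud."""
    return {"challenge": random.choice(CHALLENGES),
            "nonce": secrets.token_hex(3)}  # e.g. "a91f3c"

def verify_response(issued, observed_action, spoken_code):
    """In practice, observed_action and spoken_code would come from
    video- and audio-analysis models run on the live session."""
    return (observed_action == issued["challenge"]
            and spoken_code == issued["nonce"])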

Similarly, identity documents have long been a vulnerability in the KYC process. As far back as 2003, the FBI warned about driver’s licenses being issued without due diligence and verification [7]. Fast forward to 2022, when a report from Onfido found that fraud had shifted away from synthetic IDs, with more than 90% of ID fraud based on a “complete reproduction of an original document” [8].

With the advent of AI, synthetic IDs are becoming more sophisticated. Generative AI can now create highly realistic-looking identity documents, complete with features traditionally used for authentication, such as holograms.

An integrated view to fight deepfake KYC

Generative AI-enabled synthetic IDs will be extremely difficult to detect unless verification is recurrent and tied to behaviour monitoring. Effective identification is no longer a one-off check; it requires ongoing, careful monitoring of transactions and behaviour.
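As an illustration of what recurrent, behaviour-linked verification could look like, consider this hedged Python sketch. The weights, decay window, and thresholds are assumptions chosen for the example, not recommendations: identity confidence decays with time since the last verification and is adjusted by anomaly signals from ongoing monitoring.

from dataclasses import dataclass

@dataclass
class CustomerState:
    days_since_verification: int
    anomaly_score: float  # 0.0 = normal behaviour, 1.0 = highly anomalous

def identity_risk(state: CustomerState) -> float:
    """Blend verification staleness with behavioural anomaly signals."""
    staleness = min(state.days_since_verification / 365, 1.0)
    return 0.4 * staleness + 0.6 * state.anomaly_score

def next_action(state: CustomerState) -> str:
    risk = identity_risk(state)
    if risk > 0.7:
        return "step-up verification"  # e.g. fresh document and liveness check
    if risk > 0.4:
        return "enhanced monitoring"
    return "continue"

Under a scheme like this, a customer verified two years ago whose transactions suddenly turn anomalous is pushed back through verification, rather than trusted on the strength of a one-off onboarding check.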

Financial criminals are no strangers to using novel methods to avoid checks. When a new technology, a new payment method or a new way of doing business appears, they find ways to abuse it. Deepfake KYC is just another method that will be used to carry out financial crimes.

Deepfakes in KYC create a profoundly complex threat that can hide serious financial crimes like money laundering. These intricate crimes often involve many steps, and AI and machine learning are needed to follow the crime chains as they link up.

Fighting this level of intelligent fraud requires intelligent technologies across the many layers of human interaction and payment systems. Detecting and preventing deepfake KYC fraud demands an integrated view of financial crime; a single one-shot solution that detects a deepfake video cannot fix the problem alone.
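One way to picture that integrated view is to fuse scores from several independent detectors into a single decision, so that no one signal, including a deepfake detector, is trusted alone. The signal names, weights, and thresholds below are illustrative assumptions, not a reference design.

def fused_risk(signals: dict) -> float:
    """Weighted fusion of independent fraud signals, each scaled to [0, 1]."""
    weights = {
        "deepfake_score": 0.30,   # media-forensics model on the selfie video
        "document_score": 0.25,   # template and security-feature checks
        "device_score": 0.20,     # emulator, virtual camera, IP reputation
        "behaviour_score": 0.25,  # early transaction-monitoring anomalies
    }
    return sum(w * signals.get(name, 0.0) for name, w in weights.items())

risk = fused_risk({"deepfake_score": 0.2, "document_score": 0.1,
                   "device_score": 0.9, "behaviour_score": 0.7})
print("reject" if risk > 0.5 else "review" if risk > 0.3 else "accept")

Here a selfie video that fools the deepfake detector (a low score of 0.2) is still flagged for review by the device and behavioural signals, which is the point of an integrated approach.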

Real-time monitoring has become vital to fight financial crime in the age of widespread AI. It will take more than just knowing that an image or voice is fake to stop fraudsters in their tracks.

In short, AI won’t make KYC obsolete, provided financial institutions act now to protect themselves.

Rasha Abdel Jalil is Principal Product Development Manager at Eastnets.
