KYC, short for “Know Your Customer,” is a process that financial institutions, fintech startups, and banks use to verify the identity of their customers. As part of this process, some organizations rely on “ID images,” selfies that are cross-checked against documents on file to confirm a person’s true identity. Well-known companies implementing this technology include Wise, Revolut, and the cryptocurrency platforms Gemini and LiteBit.
However, with the rise of generative AI, doubts have been cast on the effectiveness of these ID image checks.
Recently, viral posts on X (formerly Twitter) and Reddit have demonstrated how an attacker can use open-source, off-the-shelf generative AI tools to manipulate a person’s selfie and pass a KYC verification. While there is no evidence that these tools have been used to deceive a real KYC system, their potential for creating convincing false ID images is a cause for concern.
Fooling KYC
In a typical KYC ID image check, a customer submits a photo of themselves holding a valid ID document, such as a passport or driver’s license, to prove their identity. The submitted image is then cross-checked against other documents and selfies on file to prevent impersonation.
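To make the cross-checking step concrete, here is a minimal sketch of how an automated face comparison might work, using the open-source face_recognition library. The file names and the 0.6 distance threshold are illustrative assumptions, not any specific vendor’s implementation.

```python
# Minimal sketch of a KYC-style face cross-check using the open-source
# face_recognition library. File names and the threshold are illustrative
# assumptions, not any vendor's actual pipeline.
import face_recognition

# Load the new submission and the reference photo already on file.
submission = face_recognition.load_image_file("selfie_holding_id.jpg")
reference = face_recognition.load_image_file("id_photo_on_file.jpg")

# Compute a 128-dimensional embedding for the first face found in each image.
submission_encodings = face_recognition.face_encodings(submission)
reference_encodings = face_recognition.face_encodings(reference)
if not submission_encodings or not reference_encodings:
    raise ValueError("no face detected in one of the images")

# Euclidean distance between embeddings; the library's documentation
# suggests ~0.6 as a reasonable match threshold.
distance = face_recognition.face_distance(
    [reference_encodings[0]], submission_encodings[0]
)[0]
print("match" if distance < 0.6 else "no match", f"(distance={distance:.3f})")
```

The weakness the rest of this article explores is that a check like this only confirms that two images show the same face; it says nothing about whether the submitted photo was ever a real photograph.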
But this method has never been foolproof. For years, fraudsters have sold fake IDs and doctored selfies. Generative AI has now made such forgeries faster and more convincing to produce.
Tutorials and demonstrations online show how tools like Stable Diffusion, a free, open-source image generator, can be used to create realistic synthetic images of a person in various settings, such as a living room. With some trial and error, an attacker can steer these images so the person appears to be holding an ID document, then use any image editing software to insert a real or fake document into the result.
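For a sense of what those tutorials involve, the sketch below shows the basic text-to-image step using Hugging Face’s diffusers library with a public Stable Diffusion checkpoint. The model ID, prompt, and settings are assumptions for illustration; making the output resemble a specific person is the harder part, requiring the fine-tuning extensions and target photos discussed below.

```python
# Illustrative text-to-image call with a public Stable Diffusion checkpoint,
# via Hugging Face's diffusers library. Model ID, prompt, and settings are
# assumptions for demonstration only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "photo of a person sitting on a couch in a living room, natural light",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("synthetic_scene.png")  # an ID document is composited in afterward
```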
“Now, when we can no longer trust our eyes to determine if content is genuine, we will have to rely on applied cryptography,” said Justin Leroux, a researcher who shared a Reddit “verification post” and a deepfake ID image created with Stable Diffusion on his X account.
Creating a convincing deepfake ID image requires installing additional tools and extensions and collecting around a dozen photos of the target. According to a Reddit user who goes by the username “_harsh_,” the process takes about one to two days.
Even so, the barrier to entry for attackers is lower than it used to be. In the past, creating realistic ID images with accurate lighting, shadows, and environments required advanced knowledge of photo editing software. Today, it no longer does.
Fending off the threat
As if creating deepfake ID images weren’t enough, the next step, using them to trick applications and platforms, is even easier. On a desktop emulator like BlueStacks, Android apps can be fed deepfake images in place of a live camera feed. Similarly, web apps can be fooled by software that turns any image or video source into a virtual webcam.
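The virtual webcam half of this is not exotic. Below is a minimal sketch, assuming the open-source pyvirtualcam library (which relies on a virtual camera backend such as OBS), that replays a single image as a live camera feed; the file name is a placeholder.

```python
# Minimal sketch: expose a static image as a virtual webcam feed using the
# open-source pyvirtualcam library (requires a virtual camera backend such
# as OBS). The file name is a placeholder.
import cv2
import pyvirtualcam

frame = cv2.imread("fake_selfie.png")
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # pyvirtualcam expects RGB

with pyvirtualcam.Camera(width=frame.shape[1], height=frame.shape[0], fps=30) as cam:
    while True:
        cam.send(frame)               # any app reading the webcam sees this
        cam.sleep_until_next_frame()  # pace the loop to the target fps
```

Any application that simply reads from the system’s camera device has no way, on its own, to tell this feed apart from a physical webcam.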
In response, some apps and platforms have implemented “liveness” checks as an additional security measure. These checks ask the user to perform actions, such as turning their head or blinking, in a short video to prove they are a real person.
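Under the hood, a basic blink check can be as simple as watching the eye aspect ratio, the ratio of an eye’s height to its width, collapse and recover. The sketch below uses MediaPipe Face Mesh; the landmark indices, the 0.2 threshold, and the two-blink rule are common illustrative choices, not any provider’s actual logic.

```python
# Sketch of a blink-based liveness check via the eye aspect ratio (EAR),
# using MediaPipe Face Mesh. Landmark indices, the 0.2 threshold, and the
# two-blink rule are illustrative, not any provider's actual logic.
import cv2
import mediapipe as mp
import numpy as np

EYE = [33, 160, 158, 133, 153, 144]  # commonly used indices for one eye

def eye_aspect_ratio(p):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops near zero when closed
    return (np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])) / (
        2.0 * np.linalg.norm(p[0] - p[3])
    )

cap = cv2.VideoCapture(0)
blinks, closed = 0, False
with mp.solutions.face_mesh.FaceMesh(refine_landmarks=True) as mesh:
    while cap.isOpened() and blinks < 2:  # require two blinks to pass
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            lm = result.multi_face_landmarks[0].landmark
            pts = np.array([(lm[i].x * w, lm[i].y * h) for i in EYE])
            if eye_aspect_ratio(pts) < 0.2:
                closed = True
            elif closed:  # eye reopened: count one full blink
                closed, blinks = False, blinks + 1
cap.release()
print("liveness check passed" if blinks >= 2 else "liveness check failed")
```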
However, even these liveness checks can be bypassed with the use of generative AI.
According to a report from the cybersecurity firm Sensity, 10 of the most popular biometric KYC providers are vulnerable to real-time deepfake attacks, which could put many banks, insurance companies, and healthcare providers at risk.
Last year, Jimmy Su, the chief security officer for cryptocurrency exchange Binance, told Cointelegraph that current deepfake tools have advanced enough to pass liveness checks, even those that require real-time actions from the user.
The bottom line is that KYC, which has always had its flaws, may soon become obsolete as a security measure. While Su believes deepfake images and videos cannot yet fool human reviewers, it is only a matter of time before the technology catches up.