The Demise of KYC? Exploring the Impact of Gen AI on Customer Identification

KYC authentication often relies on “ID images,” cross-checked selfies used to confirm that a person is who they say they are. There’s no evidence yet that gen AI tools have been used to fool a real KYC system. But the ease with which relatively convincing deepfaked ID images can be created is cause for alarm, and feeding those images to an app is even easier than making them. The takeaway is that KYC, which was already hit-or-miss, could soon become effectively useless as a security measure.

KYC, short for “Know Your Customer,” is the process banks, fintech startups, and other financial institutions use to verify their customers’ identities. As part of this process, some organizations rely on “ID images,” selfies that are cross-checked against documents on file to confirm a person’s identity. Well-known companies that use this approach include Wise, Revolut, and the cryptocurrency platforms Gemini and LiteBit.

However, with the rise of generative AI, doubts have been cast on the effectiveness of these ID image checks.

Recently, there have been viral posts on X (formerly Twitter) and Reddit that demonstrate how an attacker can exploit open source and off-the-shelf generative AI tools to manipulate a person’s selfie and use it to pass a KYC verification. While there is no evidence that these AI tools have been used to deceive a real KYC system, the potential they hold for creating convincing false ID images is a cause for concern.

Fooling KYC

In a typical KYC ID image authentication, a customer submits a photo of themselves holding a valid ID document, such as a passport or driver’s license, to prove their identity. The submitted image is then cross-checked with other documents and selfies on file in an effort to prevent any cases of impersonation.
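To see why generated images pose such a direct threat, it helps to look at what an automated cross-check actually measures. Below is a minimal sketch of a selfie-to-ID comparison, assuming the open source face_recognition library; the file names are placeholders, and real KYC vendors run proprietary pipelines, but the underlying idea of comparing face embeddings is the same.

```python
# Minimal sketch of an automated selfie-vs-ID cross-check, assuming the
# open source face_recognition library; file names are placeholders.
import face_recognition

# Load the freshly submitted selfie and the ID photo already on file.
selfie = face_recognition.load_image_file("submitted_selfie.jpg")
id_photo = face_recognition.load_image_file("id_photo_on_file.jpg")

# Encode each detected face as a 128-dimensional embedding vector.
selfie_faces = face_recognition.face_encodings(selfie)
id_faces = face_recognition.face_encodings(id_photo)
if not selfie_faces or not id_faces:
    raise ValueError("no face found in one of the images")

# Embeddings closer than the tolerance count as the same person.
is_match = face_recognition.compare_faces(
    [id_faces[0]], selfie_faces[0], tolerance=0.6
)[0]
print("match" if is_match else "mismatch")
```

A check like this only scores facial similarity; it has no notion of whether the submitted photo was captured by a camera or generated by a model.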

But this method has never been completely foolproof. In fact, for years, fraudsters have been selling fake IDs and selfies. Now, with the advancement of generative AI, new possibilities have emerged.

Tutorials and demonstrations online show how tools like Stable Diffusion, a free and open source image generator, can be used to create realistic synthetic images of a person in various settings, such as a living room. With some trial and error, an attacker can manipulate these images to make it seem like the person is holding an ID document. They can then use any image editing software to insert a real or fake document into the manipulated image.
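As a rough illustration of how low the technical bar has fallen, here is the core generation step using Hugging Face’s diffusers library. The prompt and model are illustrative only; the demonstrations described above additionally rely on fine-tuning tools and extensions to reproduce a specific person’s face, which is where the images of the target come in.

```python
# Sketch of off-the-shelf image generation with Stable Diffusion via
# Hugging Face's diffusers library (prompt and model are illustrative).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "photorealistic photo of a person sitting in a living room"
image = pipe(prompt).images[0]  # a PIL image, ready for further editing
image.save("synthetic_scene.png")
```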

“Now, when we can no longer trust our eyes to determine if content is genuine, we will have to rely on applied cryptography,” said Justin Leroux, a researcher who shared a Reddit “verification post” and a deepfaked ID image created with Stable Diffusion on his X account.

Creating these convincing deepfake ID images requires installing additional tools and extensions and obtaining around a dozen images of the target. According to a Reddit user who goes by the username “_harsh_,” it takes about one to two days to create a convincing image.

But the good news for attackers is that the barrier to entry is lower than it used to be. In the past, creating realistic ID images with accurate lighting, shadows, and environments required advanced knowledge of photo editing software. Today, this is no longer the case.

Fending off the threat

As if creating deepfake ID images weren’t enough, using them to trick applications and platforms is even easier. In a desktop emulator like BlueStacks, Android apps can be deceived into accepting deepfake images in place of a live camera feed. Web apps, similarly, can be fooled by software that turns any image or video source into a virtual webcam.
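To make the mechanism concrete, here is roughly what the virtual-webcam trick amounts to, sketched with the open source pyvirtualcam library (which requires a virtual camera backend such as OBS); the image path is a placeholder.

```python
# Sketch of presenting a still image as a "live" camera feed, assuming
# the open source pyvirtualcam library plus a virtual camera backend
# such as OBS; the image path is a placeholder.
import cv2
import pyvirtualcam

frame = cv2.imread("any_image.jpg")             # any static image
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # pyvirtualcam expects RGB

with pyvirtualcam.Camera(width=frame.shape[1],
                         height=frame.shape[0], fps=20) as cam:
    while True:
        cam.send(frame)               # every "camera" frame is the same still
        cam.sleep_until_next_frame()  # pace output to the declared frame rate
```

To an app that simply reads from the system’s camera device, this output is indistinguishable from a real webcam.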

In response, some apps and platforms have implemented “liveness” checks as an additional security measure. These checks involve the user performing actions, such as turning their head or blinking their eyes, in a short video to prove they are a real person.
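A common implementation of such a check is blink detection based on the eye aspect ratio (EAR) of facial landmarks. The sketch below uses MediaPipe Face Mesh; the landmark indices and the 0.2 threshold are conventional choices from the EAR literature, not any particular vendor’s specification.

```python
# Sketch of a blink-based liveness signal using the eye aspect ratio
# (EAR) over MediaPipe Face Mesh landmarks; indices and the 0.2
# threshold are conventional choices, not a vendor's spec.
import cv2
import mediapipe as mp
from math import dist

LEFT_EYE = [33, 160, 158, 133, 153, 144]  # outer, 2x top, inner, 2x bottom

def eye_aspect_ratio(p):
    # Vertical eyelid distances over the horizontal eye width; the
    # ratio drops sharply when the eye closes.
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2 * dist(p[0], p[3]))

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
cap = cv2.VideoCapture(0)
blinks, eye_closed = 0, False

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark
        pts = [(lm[i].x, lm[i].y) for i in LEFT_EYE]
        if eye_aspect_ratio(pts) < 0.2 and not eye_closed:
            eye_closed = True                    # eye just closed
        elif eye_aspect_ratio(pts) >= 0.2 and eye_closed:
            eye_closed, blinks = False, blinks + 1
            print(f"blink detected ({blinks} total)")
cap.release()
```

Signals like this raise the bar, but they still measure motion in the video rather than the authenticity of its source.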

However, even these liveness checks can be bypassed with the use of generative AI.

According to the cybersecurity firm Sensity, ten of the most popular biometric KYC providers are vulnerable to real-time deepfake attacks, which could put many banks, insurance companies, and healthcare providers at risk.

Last year, Jimmy Su, the chief security officer for cryptocurrency exchange Binance, told Cointelegraph that current deepfake tools have advanced enough to pass liveness checks, even those that require real-time actions from the user.

The bottom line is that KYC, which has always had its flaws, may soon become obsolete as a security measure. While Su believes deepfaked images and videos cannot yet fool human reviewers, it is likely only a matter of time before the technology catches up.

Max Chen

Max Chen is an AI expert and journalist with a focus on the ethical and societal implications of emerging technologies. He has a background in computer science and is known for his clear and concise writing on complex technical topics. He has also written extensively on the potential risks and benefits of AI, and is a frequent speaker on the subject at industry conferences and events.
