
Key Points
- Cheap, widely available face-swapping and deepfake tools, some costing as little as $10 to $50 a month, can now defeat the facial recognition and liveness checks used for digital identity and KYC verification.
- Threat actors, including North Korean operatives posing as IT workers, have already used live deepfakes to pass video interviews and onboarding checks and gain insider access.
- Organizations that rely on biometric verification face fraud, data theft, physical security and regulatory compliance risks as trust in these systems erodes.
Background
Biometric identification – in particular, facial recognition – has become a widely used mechanism for modern digital identity and "know your customer" (KYC) authentication across a broad range of sectors, including finance, telecommunications, healthcare and enterprise security. Such systems typically match a live scan of an individual’s face against an official ID and require the person to pass a “liveness” check through a series of movement tests.
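The two-step verification flow described above can be sketched in simplified form. This is a hypothetical illustration, not any vendor's actual implementation: real systems extract face embeddings with proprietary models and tune thresholds per deployment, so the `cosine_similarity` comparison, the `0.85` threshold and the challenge-response structure here are all assumptions for clarity.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_identity(id_embedding, live_embedding, challenge, responses,
                    match_threshold=0.85):
    """Hypothetical two-step check: (1) the live scan's embedding must
    match the embedding extracted from the official ID photo, and
    (2) the subject must complete every issued movement challenge
    (the "liveness" test, e.g. blink, turn head left)."""
    face_match = cosine_similarity(id_embedding, live_embedding) >= match_threshold
    liveness_pass = all(step in responses for step in challenge)
    return face_match and liveness_pass

# Example: matching embeddings and all challenges completed -> verified.
verified = verify_identity(
    id_embedding=[0.9, 0.1, 0.0],
    live_embedding=[0.88, 0.12, 0.01],
    challenge=["blink", "turn_left"],
    responses=["blink", "turn_left", "smile"],
)
```

The weakness the rest of this report describes is that modern face-swapping tools can satisfy both steps at once: they render the victim's face onto the fraudster in real time, so the live embedding matches the ID and the movement challenges are performed convincingly on camera.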
However, the rapid proliferation of generative AI has significantly altered the threat landscape. Publicly available face-swapping and synthetic media tools can now generate realistic images and videos that mimic human appearance and behavior in real time, creating a rising threat of impersonation. Over the last year these tools have improved markedly: offerings introduced in late 2025 and early 2026 can avoid the typical tells of AI-generated content, such as abnormal eye and lip movements and out-of-sync audio, making it easier to impersonate a targeted person. At the same time, such tools have become cheaper and more accessible, with some sold for as little as $15 a month through both legitimate channels and dark web marketplaces. For example, the AI-powered image generator OnlyFake, available for $15, produces highly realistic synthetic IDs that reportedly bypassed the identity verification system of a major global cryptocurrency exchange, according to 404 Media. Other synthetic identity kits and deepfake-as-a-service offerings are available for monthly subscriptions ranging from $10 to $50 from providers like Darkpaint, Shawtyclub and Rysuca.

Impact
Continued reliance on biometric identifiers exposes individuals and organizations to a range of risks: impersonation that facilitates fraud or reputational damage for individuals, and organization-wide risks from attempts to bypass onboarding and KYC checks or to impersonate employees and executives, which can result in data theft or monetary loss. Biometric identifiers are also increasingly used to control physical access to office buildings and other company sites, so live deepfake tools that defeat these systems create physical security risks as well. On a more systemic level, the growing ability of threat actors to bypass biometric identity verification with face-swapping and deepfake technologies significantly undermines trust in the digital authentication systems that many organizations rely on for employee access and KYC verification. Beyond potential breaches of KYC requirements, this eroding trust exposes organizations to heightened scrutiny from regulators, particularly in sectors where strong identity assurance is mandated.
For onboarding in particular, face-swapping technology can defeat liveness checks and allow fraudsters to impersonate job candidates, gaining insider access that facilitates the theft of proprietary information or personal data, which can then be sold on dark web marketplaces for financial gain. This risk is particularly salient because threat actors, including North Korean operatives posing as IT workers and West African "Yahoo Boys" scammers, have already been documented leveraging such tools to gain sensitive access or run scams. In one 2024 incident, KnowBe4 unintentionally hired a North Korean hacker who used live deepfakes to pass four separate video interviews as well as the company's internal identity verification and background check process. The same tools can be used to impersonate specific individuals, such as executives or existing employees, to gain access to corporate networks or applications protected by biometric identification, including banking apps that could be abused to facilitate large-scale wire fraud and email accounts that hold sensitive communications.
The ability to bypass liveness checks also has important implications for KYC processes in financial services, complicating organizations' efforts to prevent illicit activities such as money laundering, fraud and terrorist financing. Threat actors who can defeat identity checks are better positioned to carry out fraud and to onboard seamlessly onto banking or fintech platforms for illicit purposes. Illustrating the risk, a November 2024 analysis from Group-IB identified more than 1,100 fraud attempts against a prominent Indonesian financial institution that used deepfakes to try to bypass its KYC processes.
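Bulk campaigns like the 1,100 attempts Group-IB observed tend to reuse infrastructure, such as the same device, IP address or document number, across many verification sessions. A minimal sketch of one detection heuristic follows; the `device_id`/`passed` record schema and the threshold of three failures are illustrative assumptions, not any vendor's actual fraud model, which would combine many more signals.

```python
from collections import Counter

def flag_repeated_attempts(attempts, threshold=3):
    """Flag device fingerprints tied to `threshold` or more failed KYC
    verification attempts, a crude signal of the bulk, tool-driven
    deepfake campaigns described above.

    `attempts` is a list of dicts with hypothetical keys:
      'device_id' -- fingerprint of the device making the attempt
      'passed'    -- whether the verification succeeded
    Returns the set of suspicious device fingerprints.
    """
    failures = Counter(a["device_id"] for a in attempts if not a["passed"])
    return {device for device, count in failures.items() if count >= threshold}

# Example: one device failing repeatedly is flagged; a single failure
# and a successful verification are not.
log = (
    [{"device_id": "dev-1", "passed": False}] * 3
    + [{"device_id": "dev-2", "passed": False},
       {"device_id": "dev-3", "passed": True}]
)
suspicious = flag_repeated_attempts(log)
```

In practice this kind of velocity check complements, rather than replaces, stronger countermeasures such as hardware-backed attestation and injection-attack detection, since a patient attacker can rotate devices between attempts.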
How Companies Can Prepare