A deepfake attack occurs when a bad actor uses AI-generated or manipulated images, video, or audio to scam an individual or fool an identity verification system.
As generative AI (genAI) technologies grow increasingly sophisticated, deepfakes are a “superpower” for fraudsters and hackers around the world. Deepfake attacks surged 3,000% in 2023, with high-profile deepfake scams coming to the fore in 2024. Companies are quickly realizing that traditional defenses, including know your customer (KYC) identity verification and employee training, are insufficient to prevent deepfake attacks.
This article explains how deepfake attacks work, why most identity verification (IDV) products are vulnerable to them, and how Nametag prevents deepfake attacks.
What is a Deepfake Attack? How Do Bad Actors Actually Use Deepfakes?
Key takeaway: In a deepfake attack, a bad actor uses AI-generated or manipulated content to scam a person or fool an identity verification system. Deepfakes are such a threat because they’re now virtually impossible for humans to identify, extremely difficult to detect when inserted via digital injection attack, and very hard to catch when used in presentation attacks against unprepared identity verification systems.
Before we discuss how to stop deepfake attacks, it’s important to understand how bad actors actually use them to scam people or beat identity verification systems.
Personal scams
Fraudsters use AI-generated likenesses to scam individuals into giving up their personal information or money. For example, fraudsters have for decades targeted elderly victims, pretending to be a loved one who’s been arrested and needs money for a lawyer or for bail. Deepfakes supercharge this scam by making it virtually impossible for someone to distinguish their real loved one from a scammer. Research from 2021, when genAI was still in its infancy, already showed that people cannot detect deepfakes but think they can.
Presentation attacks
Presentation attacks use an identity verification (IDV) system’s prescribed capture process (e.g. webcam or phone camera) to present false information. For example, fraudsters can fool many IDV systems by photographing a fraudulent ID document, or by holding the camera up to another screen that displays a deepfake.
Injection attacks
An injection attack bypasses an IDV system’s capture process to insert a false image or video. For example, malicious software can masquerade as a webcam input to a web browser, inserting deepfake photos or videos into the data stream received by an identity verification system. Injection attacks are especially hard to detect, and those targeting identity verification providers increased 255% in 2023.
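To see why, consider that a virtual webcam registered by malicious software enumerates exactly like physical hardware. Here’s a minimal Python sketch (assuming OpenCV purely for illustration; it isn’t tied to any particular IDV product) showing that capture code receives only raw pixels with no provenance attached:

```python
import cv2  # pip install opencv-python

# Open whatever the OS reports as camera 0. A virtual webcam driver,
# such as one registered by deepfake-streaming software, enumerates
# exactly like physical hardware, so this call cannot tell them apart.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    # The frame arrives as ordinary pixels with no provenance metadata,
    # which is why a server receiving it can't prove a camera produced it.
    print(f"Got a {frame.shape[1]}x{frame.shape[0]} frame of unknown origin")
cap.release()
```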
Learn more about digital injection attacks →
Recent Deepfake Attack Examples
The threat of deepfakes is real, urgent, and growing: deepfake "face swap" attacks on identity verification systems increased 704% in 2023, according to some research. Here are just a few examples of recent deepfake attacks:
- November 15, 2023: Onfido’s 2024 Identity Fraud Report found a 3,000% increase in deepfake attacks in 2023 compared to 2022, attributed to the increasing availability of user-friendly generative AI tools. Within months, Onfido’s IDV was beaten by deepfake driver’s licenses.
- February 04, 2024: A multinational engineering firm lost $25 million to hackers who used live deepfake video calls to socially engineer a finance employee.
- February 05, 2024: Investigative journalists at 404 Media published a deep dive into a website called OnlyFake. For only $15, the authors received an image of an AI-generated ID that fooled identity verification at OKX, a cryptocurrency exchange.
- April 12, 2024: Password manager LastPass revealed that it narrowly thwarted a deepfake audio attack which attempted to impersonate CEO Karim Toubba.
- May 13, 2024: The CEO of advertising giant WPP said that hackers tried to scam WPP executives using deepfake videos created from YouTube footage and a voice clone.
- May 30, 2024: A Hong Kong employee was tricked into wiring HK$4 million (about US$500,000) to a scammer who used a deepfake video to pose as their company’s CFO.
- June 11, 2024: Attackers once again fooled OKX's identity verification provider, this time stealing $11 million from an investor.
Why Most Identity Verification Products are Vulnerable to Deepfake Attacks
Key takeaway: Most identity verification tools are vulnerable to emerging threats like AI-generated deepfakes and digital injection attacks because they’re built for Know Your Customer (KYC) compliance, not for security. This means that they simply cannot provide the level of assurance required for high-risk functions like account recovery.
High-profile deepfake attacks have cast the growing threat of genAI and deepfakes into sharp relief. In the wake of these stories, some organizations have begun questioning whether we’re witnessing the end of biometrics and document-based identity verification. Indeed, most IDV tools are vulnerable to deepfakes. But this pessimistic view is typically based on a misunderstanding of the threat.
Most identity verification products are vulnerable to attacks which leverage AI-generated deepfakes for the following reasons:
Document uploads via insecure channels
Many identity verification products are “repurposed” Know Your Customer tools. KYC is a specific set of regulatory requirements that certain businesses (e.g. financial institutions) must meet at specific times, such as new customer account opening. KYC products are built to meet these requirements, not to provide high security and assurance.
For example, KYC products often allow users to upload files such as ID documents or proof of address. This opens the door for attackers to upload deepfake documents instead.
“The images are so good that 404 Media was able to get past the KYC measures of OKX, a cryptocurrency exchange that uses the third-party verification service Jumio to verify its customers’ documents.” –– Decrypt.co, People Are Using Basic AI to Bypass KYC — But Should You?
Liveness detection
KYC products often rely on AI-powered, video-based liveness detection to spot potential deepfakes. Instead of taking a static selfie, they require users to record a short video following directions to move their head or make certain facial expressions. In these cases, proprietary AI models predict whether the video is legitimate or not.
But the reality is that genAI tools have become so good that it’s now simple for bad actors to create highly realistic deepfake videos of victims smiling, frowning, or moving their heads, using only a few images gathered from public sources like social media. Threat actors have already been observed using these tools to fool popular KYC identity verification providers.
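To make that concrete, here’s a toy sketch of the challenge-response pattern that motion-based liveness checks follow. The challenge names, thresholds, and yaw angles are hypothetical stand-ins for what a real face-landmark model would output:

```python
import random

# Toy active-liveness check: issue a random head-turn challenge, then
# verify that the recorded head-yaw trajectory (degrees, frontal = 0,
# left = negative) actually follows it.
CHALLENGES = {"turn_left": -25.0, "turn_right": 25.0}

def passes_challenge(yaw_trajectory: list[float], challenge: str) -> bool:
    target = CHALLENGES[challenge]
    starts_frontal = abs(yaw_trajectory[0]) < 10.0  # begin facing the camera
    reaches_target = any(
        yaw >= target if target > 0 else yaw <= target
        for yaw in yaw_trajectory
    )
    return starts_frontal and reaches_target

challenge = random.choice(list(CHALLENGES))
print(f"Challenge issued: {challenge}")
# A face-swap tool can synthesize exactly this motion on demand:
print(passes_challenge([0.0, -8.0, -18.0, -27.0], "turn_left"))  # True
```

Because the pass criterion is just a motion check, a deepfake video that performs the requested motion sails straight through, which is why challenge randomness alone no longer provides meaningful protection.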
Digital injection attacks
Many identity verification products use web browsers, webcams, and/or unverified mobile applications to capture ID documents and selfies. Unfortunately, these channels are highly vulnerable to injection attacks that insert AI-generated deepfakes in order to trick the system into falsely verifying a bad actor.
- Webcam emulators: Attackers can easily connect a third-party data source that presents itself as a webcam, but is actually a feed for deepfake video (think about how easy it is to change your video source on a Zoom call).
- Man-in-the-middle: Attackers can easily manipulate the flow of data between a browser and the server to insert deepfake documents, selfies, or videos in a way that’s virtually impossible to detect on the server’s side.
- App hijacking: IDV providers that don’t leverage the cryptographic security of Apple and Android devices in a very particular way can be fooled by sophisticated threat actors who decompile and modify the provider’s mobile app. Once they’ve decompiled the app, they can then feed deepfake video streams into it.
Learn more about digital injection attacks →
How to Stop Deepfake Attacks
Key takeaway: Deepfakes are extremely difficult for humans to detect, but there are some basic steps you can take to try to spot them. When it comes to identity verification products, don’t fall for promises of “AI detection tools”; choose a product built to block injection attacks and detect presentation attacks.
Even though humans think we're good at spotting deepfakes, research shows that in reality we're not. But there are a few things people can do to defend against deepfake attacks:
- Be skeptical: always be aware of the potential for deepfakes, and be extremely skeptical when looking at documents or selfies, or when listening to audio including phone calls. If you didn’t actively ask for the content (e.g. asking someone to call you, or asking them to send you an image or video), be highly suspicious by default.
- Pay attention to the details: To detect deepfake images, MIT researchers recommend paying close attention to details like eyebrows, cheeks, facial hair, glasses, and reflections. To spot deepfake audio or video, pay attention to blinking, lip movements, and background noise, or ask the person to do something unexpected, like making a funny face or saying something very specific in an odd tone of voice.
- Use technology: At the end of the day, generative AI tools and deepfakes have gotten so good that humans can no longer trust ourselves to reliably detect them. Thankfully, technology can help: a number of companies have released “deepfake detector” tools that directly analyze image and multimedia input, though these are inherently on the back foot for the reasons described above. Other companies, like Nametag, focus on detecting deepfake attacks through cryptographic security.
How Nametag Stops Deepfake Attacks
Nametag uses a unique combination of AI, cryptographic security and biometrics to detect deepfake attacks. Without revealing too much, here's an overview of our approach. If you'd like to learn more, don't hesitate to get in touch.
Explore a Nametag verification →
Mobile cryptography: Nametag leverages the cryptographic security of native Android and iOS apps to create a chain of trust that tells us whether the data we receive comes from a trusted source (our unmodified mobile app), enabling us to detect injection attacks.
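Nametag hasn’t published implementation details, but as a rough, generic sketch of how a chain of trust like this can work: the server accepts a capture only if it’s signed by a key that platform attestation (e.g., Apple App Attest or Google Play Integrity) previously tied to an unmodified app on genuine hardware. The Ed25519 choice and function name below are illustrative assumptions, using Python’s cryptography library:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def is_trusted_capture(payload: bytes, signature: bytes,
                       attested_public_key: bytes) -> bool:
    """Accept a capture only if it was signed by a key that platform
    attestation previously proved lives inside the unmodified app."""
    try:
        Ed25519PublicKey.from_public_bytes(attested_public_key).verify(
            signature, payload
        )
        return True
    except InvalidSignature:
        # Data injected via a browser, webcam emulator, or modified app
        # carries no valid signature and is rejected here.
        return False
```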
Holistic analysis: Where other providers look at static images of ID documents, our approach adds more layers of protection against injection and presentation attacks. Once we’ve verified the data we receive, we analyze it using a variety of proprietary techniques.
Document and selfie authenticity: Nametag’s proprietary AI models are trained to evaluate document and selfie authenticity. When available, our models evaluate depth maps and other data that other providers can’t access because they use web browsers, webcams, and other capture methods which don’t collect this information.
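As a simplified illustration of why depth data matters (the percentile heuristic and threshold below are assumptions for the sketch, not Nametag’s actual model): a live face has centimeters of relief between the nose and the ears, while a photo or screen held up to the camera is nearly planar:

```python
import numpy as np

def looks_flat(depth_map: np.ndarray, min_relief_mm: float = 15.0) -> bool:
    """Flag captures with almost no depth variation, a hallmark of a
    printed photo or screen replay. depth_map holds per-pixel camera
    distances in millimeters, as produced by a depth-sensing camera."""
    relief = np.percentile(depth_map, 95) - np.percentile(depth_map, 5)
    return relief < min_relief_mm

# A screen held up to the camera: every pixel roughly 400 mm away.
flat_replay = 400.0 + np.random.normal(0.0, 1.0, size=(480, 640))
print(looks_flat(flat_replay))  # True: flagged as a likely replay
```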
Learn more about how Nametag stops deepfake attacks →
Conclusion
As generative AI and deepfake technologies continue to grow simultaneously more sophisticated and more user-friendly, bad actors will only become more adept in their use of these tools. Independent research and high-profile news stories have already shown how traditional identity verification tools are vulnerable to this threat. Why? Because these products are built for Know Your Customer compliance, not for security, leaving substantial blind spots in their defenses that criminals are exploiting.
Nametag stands out as the first and only identity verification platform that is truly secure against both deepfakes themselves and the digital injection attacks through which deepfakes are weaponized. Only Nametag verification can deliver the levels of security and assurance required for ultra-high risk functions like self-service MFA resets.
Meanwhile, for IT and support tickets that can’t go to self-service, only Nametag offers an out-of-the-box console that lets helpdesk agents quickly and securely verify whomever they’re talking with, eliminating the dual threat of social engineering combined with deepfakes.