FinCEN Alert: Deepfake Attacks on Financial Institutions

by Nametag Team

The U.S. Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) recently issued an alert on fraud schemes involving deepfake media targeting financial institutions. FinCEN warns that bad actors are weaponizing generative AI to fool outdated customer identification, identity verification, and customer due diligence (CDD) systems. Here’s what you need to know.

Key Insights from FIN-2024-Alert004, FinCEN’s Alert on Deepfake Attacks:

  • Low barrier to entry: Generative AI tools make it easy for anyone to produce high-quality synthetic content that is extremely difficult, often impossible, to distinguish from reality. The barrier to entry for fraud has never been lower.
  • How fraudsters use deepfakes: Fraudsters create fake IDs, photographs, and videos to trick customer verification and due diligence systems, socially engineer employees and customers, or combine synthetic media with stolen PII to create synthetic identities.
  • Mitigations are critical: FinCEN includes numerous red flags to look for and ways to detect deepfake identity documents and other synthetic media, but many of the methods they recommend are themselves vulnerable to injection attacks and other threats.
  • Costs and impacts: In addition to financial losses stemming from fraud incidents, institutions face compliance challenges. FinCEN also asks that institutions include the key term “FIN-2024-DEEPFAKEFRAUD” in SAR field 2 (“Filing Institution Note to FinCEN”) when filing a Suspicious Activity Report.

Deepfake Red Flags and Remediation Best Practices: Key Considerations

FinCEN lists nine “red flags” to help financial institutions identify the misuse of genAI tools by bad actors. These flags are a good starting point, but difficult to act on by themselves, so FinCEN also offers recommendations for mitigating deepfake threats. Specifically, it calls out phishing-resistant MFA and live verification checks that confirm someone’s identity through audio or video.

Both are better than nothing, but each has important vulnerabilities and gaps:

  • Phishing-resistant MFA is a general best practice, but attackers can bypass it by exploiting the MFA recovery process, effectively creating a “back door.”
  • As FinCEN points out, fraudsters can simply respond to liveness checks with deepfake audio or video, putting the burden on humans to spot it.

We’ve written extensively on the weakness of visual verification calls in the era of generative AI and deepfakes. So we’re glad to see FinCEN talk about using third-party identity verification providers to identify potential deepfakes. However, their point of view demands some clarification.

First, FinCEN writes, “…identity verification solutions may also use more technically sophisticated techniques to identify potential deepfakes, such as examining an image’s metadata or using software designed to detect possible deepfakes or specific manipulations.” 

Metadata analysis can be a powerful tool for detecting fraud, but fraudsters can just as easily add fake metadata to an image as they can create that fake image in the first place. Software that solely relies on detecting possible deepfakes or specific manipulations has a similar weakness. Nametag’s Cryptographic Attestation™ technology, on the other hand, ensures data integrity from start to finish.
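To illustrate how little effort metadata forgery takes, here is a minimal sketch (ours, not from the FinCEN alert) that builds a perfectly valid PNG from scratch and embeds a fabricated “Creation Time” text chunk, using only Python’s standard library. Any metadata-based check that trusts this field would see a photo apparently taken in 2015:

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, data, CRC-32 over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Minimal 1x1 grayscale image: 8-bit depth, color type 0, no interlace.
ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
# One scanline: filter byte 0 plus a single black pixel.
idat = chunk(b"IDAT", zlib.compress(b"\x00\x00"))
# Forged metadata: a tEXt chunk claiming the image was created in 2015.
fake_meta = chunk(b"tEXt", b"Creation Time\x002015-06-01T12:00:00Z")

png = b"\x89PNG\r\n\x1a\n" + ihdr + fake_meta + idat + chunk(b"IEND", b"")
```

A dozen lines of stdlib code, and the “evidence” in the file says whatever the forger wants. This is why metadata analysis can only ever be one weak signal among many.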


Next, FinCEN points out that, “Some identity verification solutions may also flag possible attempts to circumvent verification checks, such as the use of third-party webcam plugins, which can let a customer display previously generated video rather than live video.” 

FinCEN is spot-on that any identity verification system which uses a person’s web browser is vulnerable to webcam emulators. But the approach FinCEN describes relies on detecting an injection or emulation, and by the time you detect it, it’s already too late. In addition, modern deepfake tools don’t have to play a “previously generated” video; they can stream a live video deepfake in real time.

Ultimately, this is a core vulnerability of any identity verification tool with an in-browser user experience. Because they can’t use Cryptographic Attestation to ensure data integrity from capture onward, they’ll always be vulnerable to digital injection attacks and webcam emulators.
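The core idea behind capture-time attestation can be sketched in a few lines. This is an illustrative sketch only, not Nametag’s actual implementation: the capturing device signs the raw image bytes with a key the browser never sees, so a frame injected or swapped after capture fails verification server-side. Real systems use hardware-backed asymmetric keys (e.g., keys in a phone’s secure enclave); an HMAC with a shared secret keeps this sketch stdlib-only.

```python
import hashlib
import hmac
import os

# Hypothetical device key, provisioned to trusted hardware at capture
# time and never exposed to the browser or any webcam emulator.
DEVICE_KEY = os.urandom(32)

def attest_capture(image_bytes: bytes) -> bytes:
    """Produce a signature over the frame, inside the trusted device."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()

def verify_capture(image_bytes: bytes, tag: bytes) -> bool:
    """Server-side check: any frame altered or injected after capture
    fails, because the attacker cannot produce a valid signature."""
    return hmac.compare_digest(attest_capture(image_bytes), tag)

genuine = b"raw sensor frame bytes"
tag = attest_capture(genuine)
assert verify_capture(genuine, tag)                 # untampered frame passes
assert not verify_capture(b"injected deepfake", tag)  # injected frame fails
```

The design point is that verification no longer depends on *detecting* a deepfake at all: an injected frame simply has no valid signature, regardless of how convincing it looks.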


Where to Learn More About FinCEN’s Alert Regarding Deepfake Attacks on Financial Institutions
