KYC Wasn't Built for This: Lessons from the OPCOPRO Scam

By Sam Chan


How AI-enabled impersonation is redefining identity security and shaping the future of enterprise trust.

In late 2025, Check Point researchers documented a sophisticated investment scam known as the OPCOPRO or “Truman Show” operation. Victims were drawn into private messaging groups, encouraged to download a mobile trading app, and asked to complete identity verification before depositing funds.

What made the OPCOPRO operation notable wasn't simply that identity verification was present, but how it was used. The attackers subverted the familiarity of Know Your Customer (KYC) compliance checks to build trust with their victims. The KYC check itself functioned as a fraudulent trust signal rather than a true safeguard, persuading victims to hand over highly sensitive identity data.

The incident highlights a growing gap between identity verification designed for regulatory compliance and identity systems expected to operate safely in adversarial environments. But the deeper lesson from OPCOPRO isn't about investment fraud; it's about identity systems themselves. The operation shows how compliance-oriented identity verification can be repurposed as an attack primitive, enabling the collection of legitimate identity artifacts that create risk far beyond the original scam.

A Scam Built on Familiar Processes

Unlike traditional phishing campaigns, the OPCOPRO operation didn’t rely on urgency or technical exploits. Instead, it built credibility over time.

Victims encountered ads or messages impersonating financial institutions, then joined private WhatsApp or Telegram groups populated by convincing “experts” and peers. Daily market commentary and social proof created the appearance of a legitimate investment community.

Only after trust was established were users asked to download an app from an official app store and complete a familiar step: identity verification. Users uploaded government-issued IDs, submitted biometric selfies, and provided personal details — exactly what many legitimate financial services require during onboarding. In this context, the identity verification step didn't raise suspicion. In fact, it reinforced the perception that the service was legitimate.

From Consumer Exposure to Enterprise Risk

The consequences of OPCOPRO don't end with the initial fraud. Although the scam primarily targeted individuals, the identity artifacts it collected don't remain confined to consumer fraud. Government-issued IDs and biometric captures harvested this way feed downstream attacks with a direct impact on internal enterprise cybersecurity.

Victims of OPCOPRO believed they were completing a one-time check for a trading app. In reality, they were exposing identity artifacts that can be used elsewhere for impersonation attacks, including SIM swaps, account recovery abuse, and wholesale synthetic identity creation — attacks directly affecting enterprise systems that still rely on weak or static identity checks.

Because the identity artifacts harvested by OPCOPRO are themselves valid, they can be replayed across other contexts that rely on government-issued documents or authoritative data sources. This does not imply compromise of government systems themselves, but rather reflects the downstream risk created when trusted documents are reused outside their original context.

From Bad to Worse: A Security Nightmare

Because the documents contain valid identity information, fraudsters can digitally manipulate the IDs to insert their own face and then inject them into the databases that KYC providers rely on as sources of truth. From there, it comes down to the KYC provider's AI models trying to detect the generative AI models that performed the face swap. And as any defender knows, AI-versus-AI detection is an arms race defenders can't count on winning.

KYC providers know this, which is why they're investing so heavily in better detection models. But that scenario is just one way a bad actor could use the personal data harvested in scams like OPCOPRO. Worse still, a fraudster could simply present the victim's valid selfie alongside the victim's valid ID. As long as the fraudster can inject both pieces of media into the verification data stream, neither capture will contain any evidence of digital manipulation for the AI models to detect.

For enterprise security teams, the implications are dire. Account-management workflows, from hiring and onboarding through account recovery and helpdesk interactions, are all vulnerable.

Imagine an attacker calling your helpdesk, pretending to be an employee to socially engineer a password reset. Your helpdesk uses a KYC tool for identity verification, so the agent prompts the caller to verify themselves. But the fraudster is using a stolen driver's license bearing their own face, with information that matches DMV records; they may even be on a VPN so they appear to be near the address on the ID. The KYC system returns a green checkmark, and all of the signals look good. The helpdesk agent completes the reset. And the fraudster is in.
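To make the failure mode concrete, here is a minimal sketch of the kind of signal-aggregation logic such a verification flow might use. The function and signal names are hypothetical, not any real KYC vendor's API. The point: because every artifact the attacker replays is genuine, each static check passes, and the composite decision comes back "verified."

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    """Static signals a KYC-style check might aggregate (illustrative only)."""
    document_is_authentic: bool    # ID passes template/security-feature checks
    face_matches_document: bool    # selfie matches the photo on the ID
    record_matches_registry: bool  # name/DOB/number match an authoritative DB
    location_plausible: bool       # IP geolocation near the address on the ID

def naive_kyc_decision(s: VerificationSignals) -> str:
    """Approve when every static signal passes.

    The flaw: a stolen-but-genuine ID and selfie satisfy every check,
    because none of these signals proves the artifacts belong to the
    person presenting them right now.
    """
    checks = [
        s.document_is_authentic,
        s.face_matches_document,
        s.record_matches_registry,
        s.location_plausible,
    ]
    return "verified" if all(checks) else "step-up review"

# An attacker replaying valid harvested artifacts (plus a VPN) passes:
replayed = VerificationSignals(True, True, True, True)
print(naive_kyc_decision(replayed))  # prints "verified"
```

Every individual signal here is working as designed; the weakness is that the decision binds the artifacts to each other, not to the live caller.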

This is not a hypothetical scenario, nor a hypothetical risk. 

What This Reframes About Identity Verification

The lesson from OPCOPRO isn't that identity verification is inherently flawed. It's that identity verification systems that place too much trust in database lookups, and that rely on AI models to detect AI manipulation, don't provide nearly as much security as they claim. As impersonation becomes easier to scale and fraud schemes increasingly resemble legitimate businesses, organizations need to be clear about what their identity systems actually guarantee — and what they don't.

If this incident raises questions about how impersonation exploits familiar verification workflows, our 2026 Workforce Impersonation Report examines this pattern in more detail, including how these attacks evolve and why traditional detection methods often miss them.

Secure your helpdesk against social engineering and impersonators.