Rebuilding Enterprise Trust in the Age of AI

by Nametag Team

Enterprise security teams have spent years getting buy-in and rolling out phishing-resistant MFA across their organization. But now, that work is being undermined. The most secure MFA factor in the world means little if a bad actor can engineer a reset by impersonating an employee, or enroll MFA themselves by impersonating a legitimate new hire.

The fact is, generative AI has changed the game for attackers and defenders alike. Tools like Sora 2 can instantly make anyone look like anyone else, making impersonation attacks more scalable and more effective than ever before. Deepfake-related fraud has nearly tripled in the past year, while 62% of organizations report being the target of a deepfake attack in the past 12 months.

Defenders need to understand and account for AI-wielding imposters when building risk matrices and planning their identity security strategies for 2026.

Keep reading to learn why the AI trust gap is now the biggest risk to enterprise identity security, and what it will take to close that gap.

Key Takeaways

  • Generative AI has made impersonation a foundational operational risk.
  • Traditional identity tools and MFA factors can’t confirm who is actually behind a request.
  • Identity verification is the next evolution of workforce identity security.
  • Not all identity verification tools can provide workforce-grade assurance.

Generative AI Wields Power and Risk for the Workforce

When OpenAI introduced Sora, the internet was amazed. Type a few words into a description box and you get an image or video that looks real. Sora 2 is even more convincing. It’s impressive. It’s also dangerous.

The same technology that powers creative storytelling can also generate synthetic people who look and sound real. For attackers, that capability turns AI into a weapon. They can create convincing fake identities and use them to manipulate, deceive, or exploit trust. Meanwhile, the content guardrails in place around Sora 2 are proving easy to bypass.

For security and IT teams, this creates a different kind of problem. Every video call, chat, and support ticket becomes suspect. What used to be a routine process is now a potential breach. Helpdesks can end up spending more time verifying who they're talking to than resolving tickets.

How Generative AI Transforms Impersonation

For years, workforce identity security has been based on phishing awareness, MFA adoption, and endpoint protection. But that model assumed that we could tell what was fake. Generative AI broke that assumption.

Attackers can now weaponize trust itself. They can appear in a live video call as a trusted colleague, sound authentic on a support call, or send a message that looks like it came from leadership. Each interaction feels legitimate until it isn’t.

The result is a persistent gap. You can know that a credential or token is valid, but you cannot know that it is being used by the right person.

This is a new kind of threat that lives between people, systems, and trust. It targets workflows built on trust, such as password resets, access approvals, device enrollment, and onboarding: the moments when security depends most on human judgment. Security professionals need a new foundation of identity security to recognize when the person on the other side of an interaction isn't who they claim to be.

Why Remote Work Makes Impersonation Practical and Detection Impractical

Most enterprise interactions now happen through screens. Support requests, access approvals, and onboarding all take place through chat, email, or video. This can be vastly more efficient than requiring in-person interactions for everything, but it also creates a trust problem.

When the people you work with exist mostly as profile pictures and video tiles, authenticity becomes harder to prove. Attackers understand this better than anyone, and they see it as an opening.

Increasingly, we're seeing bad actors use generative AI to mask their true appearance, sound, and writing style in order to request help, get access, or gain approval. Sometimes, imposters copy familiar names, faces, and voices. Other times, they invent entirely fake personas, as in the case of North Korean IT workers.

To the person on the other end of a support ticket or IT onboarding workflow, everything looks normal. The process feels routine. Until it isn’t.

These deepfake impersonation attacks aren't theoretical. They’re already happening, en masse, to organizations and individuals all around the world.

Traditional Security Controls Aren't Built to Stop AI-Driven Impersonation

Companies are doing what they can to combat basic impersonation attacks. IAM, SSO, MFA, and device posture checks are essential. But MFA verifies access, not authenticity, and no single MFA factor can be fully trusted.

MFA Factor | What it Verifies | How it's Vulnerable
SMS passcode | That the user has access to text messages sent to a particular phone number. | Interception, SIM swaps, phishing, social engineering.
Email link | That the user has access to a particular email address. | Interception, forwarding, phishing, social engineering.
Push notification / authenticator app | That the user has access to a device on which they've enrolled an authenticator app. | Push fatigue attacks, social engineering, phishing.
Passkey (device-bound) | That the user has access to the device on which they've enrolled a particular passkey. | Downgrade attacks via the passkey recovery process.
Passkey (synced) | That the user has access to a device on which they've enrolled or transferred a particular passkey. | Downgrade attacks via the passkey recovery process or passkey transfer process.

Video checks and “camera on” policies can help deter deepfake impersonation, but they no longer provide a sufficient level of assurance for workforce security. Live video deepfake filters are available on GitHub. Major tech publications are warning that “Deepfake Scams Are Distorting Reality Itself.” The oft-cited story of a finance employee who was fooled into wiring away $25 million is just the tip of the iceberg; one recent report warned that fraud cases involving AI-powered video calls rose 118% in 2024.

Enterprises need a trustworthy, scalable risk mitigation model that accounts for modern deepfake threats and verifies the actual person behind each account or action. Until then, deepfake impersonation will remain an easy way to bypass other security controls.

How Verifying Humans Closes the Enterprise Impersonation Gap

If impersonation succeeds because our standard models for authenticating users really just verify ownership or access to credentials and devices, the fix is clear: start verifying the actual human behind those credentials or devices.

Start by mapping the moments where trust matters most: account recoveries, helpdesk tickets, privileged access requests, and new worker onboarding. These are the points where impersonation attacks can have the most impact. Then, integrate identity verification systems that authenticate the actual human being involved, not just their device or credentials.

When integrated into workforce processes and IAM, ITSM, and HRIS systems, workforce identity verification providers like Nametag can tie a real, verified human to actions like password resets and access grants in a way that is far more trustworthy than traditional authentication methods.
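
As a rough sketch of what that gating can look like in practice (the verification service, function names, and fields below are hypothetical placeholders, not Nametag's actual API), a helpdesk password reset might only proceed once the requester passes a live identity check:

```python
# A minimal sketch of gating a helpdesk password reset on human identity
# verification. The verification service, function names, and fields here
# are hypothetical placeholders, not a specific vendor's API.
from dataclasses import dataclass


@dataclass
class VerificationResult:
    verified: bool         # the person completed an ID document + selfie check
    liveness_passed: bool  # the selfie was a live capture, not a replay or deepfake
    full_name: str         # name extracted from the verified government ID


def request_identity_verification(employee_email: str) -> VerificationResult:
    """Placeholder: send the employee a verification link and wait for the outcome."""
    raise NotImplementedError("integrate your identity verification provider here")


def handle_password_reset(ticket_id: str, employee_email: str, directory_name: str) -> bool:
    """Approve the reset only for a verified, live person who matches the directory record."""
    result = request_identity_verification(employee_email)

    if result.verified and result.liveness_passed and result.full_name == directory_name:
        print(f"[{ticket_id}] identity verified for {result.full_name}; proceeding with reset")
        # ...call the IAM / directory API to issue the reset here...
        return True

    print(f"[{ticket_id}] identity verification failed; escalating to security review")
    return False
```

The point of the sketch is the ordering: the sensitive action is approved based on who the verified person is, not on which codes, inboxes, or devices they happen to control. A production integration would match the verified identity against the HRIS or directory record rather than relying on a simple name comparison.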

It’s not a replacement for IAM; it's the layer that protects IAM against advanced impersonation threats.

This layered approach gives enterprises what they have always needed: a reliable way to know who is on the other side of a device or session, stopping imposters without slowing real work.

What it Takes to Rebuild Trust at Work

Generative AI broke online trust. Human identity verification is rebuilding it.

The internet runs on trust, and that trust is breaking faster than most realize. ChatGPT first made the unreal read real; now, tools like Sora 2 make the unreal look and sound real.

Generative AI didn’t invent impersonation, but it did industrialize it. The only way forward is to verify the people behind every action and request.

The next era of enterprise security won’t be defined by stronger passwords or smarter AI. It will be defined by knowing who is real.

Frequently Asked Questions

How is AI-powered impersonation different from traditional phishing?
Phishing tricks someone into divulging their credentials. AI impersonation uses generative AI to convincingly mimic another person's voice, face, or writing style, so rather than stealing credentials, the attacker convinces people and processes that they are someone they're not.

Why can’t MFA or SSO stop impersonation?
MFA and SSO verify that someone has access to a particular credential, device, or phone number, not that they are the right person. Fraudsters have developed numerous ways to exploit that gap.

What does human identity verification actually involve?
Human identity verification means confirming that the person behind a request or action is really who they claim to be. It combines an adaptive identity check with liveness detection and data integrity validation to prevent the use of deepfakes and other impersonation tools.

How can organizations get started with human identity verification?
Map user actions where trust is critical, such as password resets, account recoveries, onboarding, and privileged access requests. Then integrate human identity verification into those workflows to reliably validate a user’s authenticity before approving the action.
