Agents Can Act. Only You Should Authorize.

by
Manish Thakrani


Your AI agents don't need to be hacked. They just need to be authorized by someone who isn't you, and right now, nothing stops that from happening.

An OpenClaw user's agent sent over 500 unsolicited messages to contacts after being given access to iMessage — "by the time the problematic behavior was observable, irreversible actions had already occurred." Another user's agent accidentally started a dispute with their insurance company because of a misinterpreted response. A developer's autonomous coding agent was hijacked by a single malicious GitHub issue, pulling private code into public repositories without any human direction.

The pattern is the same every time: an agent had permission, an approval came through, and the wrong thing happened. Not because the approval mechanism was absent — but because nobody checked who was approving.

TL;DR

  • AI agents can act faster than any human — and that's the feature that's now a liability.
  • Human-in-the-loop solved for presence. It didn't solve for identity. Approval proves someone responded — not that the right person responded.
  • Nametag verifies the real human behind every agent authorization and returns a clear pass or fail decision. The agent can only proceed when the right person confirms it.
  • The agent has no standing to authorize anything. It can only act when a verified human says yes — and Nametag confirms it was the right human.
  • We built an open-source reference implementation that wires this into the A2H protocol as an MCP server. It works with Claude Code and OpenClaw today.

Act I: The age of autonomous agents

For the first wave of AI agents, autonomy was the feature. Set them loose, give them tools, let them execute. An agent connected to your files, your infrastructure, your communications could act faster than any human — and that was the point.

Act II: Human-in-the-loop

The response was the right one. Frameworks emerged requiring agents to pause before consequential actions and get a human to approve. The A2H protocol defined a standard for this: agents send an AUTHORIZE request, a human receives a notification, confirms, and the agent proceeds only on that confirmation.

This was meaningful progress. Agents stopped acting unilaterally. A human touchpoint now existed before every high-stakes decision.
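The round trip above can be sketched in a few lines. This is an illustrative shape only — the message fields here are hypothetical stand-ins, not the actual A2H wire format:

```python
# Illustrative sketch of an A2H-style authorization round trip.
# Field names are assumptions for illustration, not the real A2H schema.
import json

def build_authorize_request(action: str, detail: str) -> str:
    """Agent side: describe the consequential action and pause for a human."""
    return json.dumps({
        "type": "AUTHORIZE",
        "action": action,
        "detail": detail,
    })

def human_confirms(request_json: str, approved: bool) -> dict:
    """Human side: the notification arrives and is confirmed (or declined)."""
    request = json.loads(request_json)
    return {
        "type": "AUTHORIZE_RESULT",
        "action": request["action"],
        "approved": approved,
    }

req = build_authorize_request("delete_backups", "Remove backups older than 90 days")
result = human_confirms(req, approved=True)
assert result["approved"]  # the agent proceeds only on this confirmation
```

Note what this sketch does not check: anyone holding the approval channel could have produced that `approved: True`.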

But it introduced a new assumption — one that turned out to be fragile.

Human-in-the-loop solved for presence, but anyone with access to the approval channel can respond. Anyone who picks up the phone can confirm. Approval proves presence, not identity.

Act III: The right human in the loop

Nametag's Deepfake Defense™ engine verifies the real human behind an action and confirms identity against a government-issued ID. Instead of surfacing signals to teams, it returns a clear pass/fail decision.

Applied to agent authorizations, every confirmation is biometrically bound to a specific, verified person. Not just someone confirmed. You confirmed.

How it works:

  1. Enroll once. Scan your government ID and take a selfie. This creates a biometrically bound subject ID.
  2. Selfie Reverification on every sensitive action. Each approval is matched back to the enrolled identity through Selfie Chaining™.
  3. Deepfake Defense™ on every check. Confirms there is a live human, not a photo, mask, or synthetic face.
✓ Approved — right person

You: "Delete the old backup files"
Agent: "This is destructive. Shall I proceed?"
You: "Yes"
Agent: [Selfie Reverification sent to your phone]
…Spatial Selfie™…
Agent: "Identity confirmed — Alice Smith. Deleting now."

✗ Denied — wrong person

Agent: [Selfie Reverification sent to your phone]
…a different person completes the Spatial Selfie™…
Agent: "Verification completed, but the identity doesn't match the enrolled owner. Action denied."
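The pass/fail decision behind both transcripts reduces to two checks: a live human completed the selfie, and that human is the enrolled owner. A minimal sketch, with illustrative field names rather than Nametag's actual API:

```python
# Minimal sketch of the authorization decision after a Selfie Reverification.
# Function and field names are assumptions for illustration.

def authorize_action(enrolled_subject_id: str, verification: dict) -> bool:
    """Approve only if a live human passed the check AND that human
    matches the enrolled owner."""
    # Deepfake Defense: reject photos, masks, and synthetic faces.
    if not verification.get("liveness_passed"):
        return False
    # Selfie Chaining: the verified subject must be the enrolled owner.
    return verification.get("subject_id") == enrolled_subject_id

# Right person, live check -> approved
assert authorize_action("alice-123", {"liveness_passed": True, "subject_id": "alice-123"})
# Different person -> denied, even though *a* verification completed
assert not authorize_action("alice-123", {"liveness_passed": True, "subject_id": "bob-456"})
```

The second assertion is the whole point: a completed verification is not the same as a matching one.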

This is not about giving agents an identity

There's a different framing gaining traction: issuing IAM credentials directly to agents so they act with their own verified identity in enterprise systems. That solves a different problem — authorizing what the agent can do.

What we're describing here is the opposite. The agent has no identity of its own. It has no standing to authorize anything. It can only act when a verified human explicitly says yes — and Nametag confirms that the human who said yes is the one who was supposed to. Every authorization creates an auditable record that includes which account acted and which verified human approved it.

This matters because agents can take actions that can't be undone. The value of human oversight comes precisely from it being human — a real person, accountable, present, making a deliberate choice. A credential token doesn't carry that. A biometrically verified identity does.

A reference implementation

We built an open-source reference integration that wires Nametag identity verification into the A2H protocol as an MCP server — a single tool that any MCP-compatible agent can call before taking a sensitive action.


This is a reference implementation — built to prove the concept and invite the community to explore the integration pattern alongside us. It is not production-ready and is not intended to be deployed as-is. Full source and documentation at github.com/nametaginc/nametag-a2h.

curl -fsSL https://raw.githubusercontent.com/nametaginc/nametag-a2h/main/install.sh | bash

When an agent wants to delete files, push to production, or send an external message, it calls nametag_authorize. The enrolled owner receives a Selfie Reverification request on their phone. If the biometric matches, the action proceeds. If it doesn't, it's denied. The agent never touches enrollment — that's a human-only act, done once at setup.
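In practice, the gate is one tool call before the destructive step. The sketch below assumes a generic `call_tool` dispatcher (a stand-in for whatever MCP client your agent framework provides) and an assumed result shape; it is not the reference implementation's actual code:

```python
# Sketch of gating a destructive step behind the nametag_authorize tool.
# `call_tool` and the result fields are stand-ins for illustration;
# a real agent would dispatch this over MCP to the Nametag server.

def call_tool(name: str, args: dict) -> dict:
    # Stand-in: here we simulate an approval by the enrolled owner.
    return {"approved": True, "verified_identity": "Alice Smith"}

def delete_old_backups() -> str:
    result = call_tool("nametag_authorize", {
        "action": "Delete backup files older than 90 days",
    })
    if not result["approved"]:
        # Wrong person or failed liveness check: refuse to act.
        raise PermissionError("Denied: identity did not match the enrolled owner")
    return f"Deleting now. Approved by {result['verified_identity']}."

print(delete_old_backups())
```

The agent code never sees enrollment data; it only learns whether the verified owner said yes.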

It works with Claude Code and OpenClaw today and installs in a single command. Enrollment takes about five minutes. After that, every re-verification takes seconds, fast enough that it doesn't interrupt the flow. You'll need active subscriptions to both Claude Code (or OpenClaw) and Nametag. Reach out to our account team or request access below.

Know who and where approvals happen

Verifying who approved is the first layer. The next one is knowing where they were when they did it.

Nametag Location Integrity™ adds a verified location to the authorization record. When combined with identity verification, it produces an auditable record that a specific person was in a specific place at a specific moment in time — cryptographically attested and tamper-resistant.
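One way to picture such a record: identity, location, and time bound together under a single attestation, so changing any field invalidates it. The fields and the hash-based scheme below are assumptions for illustration, not Nametag's actual record format:

```python
# Illustrative tamper-evident authorization record.
# Fields and hashing scheme are assumptions, not Nametag's format.
import hashlib
import json

def attest(record: dict) -> dict:
    """Attach a digest over the whole record (stand-in for a real attestation)."""
    payload = json.dumps(record, sort_keys=True).encode()
    return {**record, "attestation": hashlib.sha256(payload).hexdigest()}

def tampered(rec: dict) -> bool:
    """True if any field no longer matches the attestation."""
    body = {k: v for k, v in rec.items() if k != "attestation"}
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() != rec["attestation"]

record = attest({
    "action": "deploy_to_production",
    "verified_human": "Alice Smith",
    "location": "Trusted office location",
    "timestamp": "2025-06-01T14:03:22Z",
})
assert not tampered(record)

record["location"] = "somewhere else"  # alter the record after the fact
assert tampered(record)                # the attestation no longer matches
```

A production attestation would use signatures rather than a bare hash, but the property is the same: person, place, and moment are fixed together or not at all.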

This capability exists on the Nametag platform today. Bringing it into agent authorization flows — so deployment to production requires not just the right person's face but their verified physical presence in a trusted location — is the natural next step. 
