Anthropic Quietly Rolls Out Passport + Selfie Verification to Access Some Claude Features
Anthropic has launched identity verification — requiring a government-issued ID and live selfie via Persona — to gate access to certain Claude features. The company hasn't disclosed which features are gated or what triggers a verification request. Privacy advocates are pushing back, given that many users chose Claude specifically over other AI providers for its privacy stance.
Anthropic quietly rolled out identity verification on April 15, 2026, requiring users to submit a government-issued ID (passport, driver's license, or national ID) alongside a live selfie through Persona, a third-party identity verification platform. The move was announced via a support article rather than a blog post — an unusual communications choice for a policy with significant privacy implications.
The company has not disclosed exactly which features require verification or what user behavior triggers the requirement. Multiple users on Hacker News and X reported hitting the verification gate without warning while using Claude normally. Anthropic's support documentation states that identity data is "held by Persona, not Anthropic directly" — a framing that has done little to reassure critics, who point out that data shared with a third party is still data shared.
The backlash has been sharp, particularly because Anthropic has positioned Claude as a privacy-respecting alternative to ChatGPT, which has its own account verification and usage tracking. The company's Constitutional AI framing had cultivated trust among privacy-conscious users, researchers, and medical professionals who share sensitive information in their workflows. Several prominent researchers posted publicly that they were reconsidering their Claude API usage.
Anthropic has not commented publicly beyond the support article. The move is likely connected to ongoing pressure around AI use in high-stakes or regulated domains — identity verification may be required by enterprise customers or insurance/liability frameworks for certain capabilities. But the opaque rollout has raised more questions than the policy itself answers.
For developers and heavy Claude users, the practical question is what triggers the gate. Until Anthropic publishes clearer documentation, the verification requirement introduces new friction and uncertainty into workflows that previously felt friction-free.
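In the meantime, developers can at least fail gracefully. Since Anthropic has not documented how the gate surfaces in API responses, everything below is an assumption for illustration: the 403 status and the error-type string `identity_verification_required` are hypothetical placeholders, not documented values. The sketch classifies an error response so an application can surface a clear verification prompt instead of blindly retrying:

```python
# Hypothetical sketch: Anthropic has not documented how the verification
# gate appears in API error responses. The 403 status and the error-type
# string below are assumptions for illustration only.

VERIFICATION_ERROR_TYPES = {
    "identity_verification_required",  # assumed error type, not confirmed
}

def is_verification_gate(status_code: int, error_body: dict) -> bool:
    """Return True if an error response looks like the identity gate.

    Checks for a 403 plus an assumed error-type string; a real
    integration should update both once Anthropic documents the gate.
    """
    if status_code != 403:
        return False
    error = error_body.get("error", {})
    return error.get("type") in VERIFICATION_ERROR_TYPES

def handle_response(status_code: int, body: dict) -> str:
    # Surface the gate to the user instead of retrying, which would
    # fail identically and burn rate limits.
    if is_verification_gate(status_code, body):
        return "verification_required"
    if status_code >= 400:
        return "error"
    return "ok"
```

The design point is simply to separate "this request can never succeed until the user verifies" from transient errors, so retry logic doesn't hammer a wall that won't move.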
Panel Takes
The Builder
Developer Perspective
“The support-article-only announcement is a communications misfire — this deserved a transparent blog post explaining exactly which features are gated and why. Developers building on the Claude API need to know whether their users will hit verification walls mid-workflow. The BYOK/API tier should be explicitly exempt or the trust damage accelerates.”
The Skeptic
Reality Check
“This is the predictable outcome of AI companies pursuing enterprise and government contracts — identity verification becomes a compliance checkbox. The positioning as 'privacy-first' was always going to strain against the commercial reality of selling to regulated industries. Anthropic isn't different; they're just later to the verification wall.”
The Futurist
Big Picture
“Identity verification for AI access is coming industry-wide — either from regulatory mandates or liability pressure. The real question is whether verification becomes a privacy-preserving system (zero-knowledge proofs, age verification without ID copies) or a surveillance mechanism. Anthropic's use of a third-party vendor suggests they chose the path of least implementation resistance, not the most privacy-preserving one.”