Anthropic has rolled out mandatory identity verification (KYC) across Claude's platform. If you use Claude regularly — especially on a paid plan — you will hit this at some point. Here's what changed, who it affects most, and what to do about it.

## What Changed
Claude now requires some users to verify their identity before continuing to use certain features. The verification is handled by Persona, a third-party identity service. You need:
- A government-issued photo ID (passport, driver's license, or national ID card)
- A live selfie (liveness detection, not just a photo)
- The original document — no screenshots, no scans, no digital versions

The whole process takes about five minutes. Anthropic says the data is used only for verification, is not used for model training, and is not stored on Anthropic's own servers.
## Who Gets Triggered First
Anthropic hasn't made this mandatory for every account at once, but the list of triggers is expanding fast:

| Situation | Verification Required? |
|---|---|
| Signing up for Claude Max | Yes |
| Account flagged by risk systems | Yes |
| Using certain high-privilege features | Yes |
| Regular free-tier usage | Not yet |
| Abnormal usage patterns detected | Yes |
The pattern is clear: the heavier your usage, the sooner you'll see this.
## What Happens If You Don't Verify
- Features get restricted or locked
- In some cases, accounts get suspended entirely
- Completing verification doesn't guarantee account survival — region and usage history are checked simultaneously

This last point matters. Several users have reported completing verification successfully and still getting banned because their account registration region didn't match their actual location or payment method.
## The Real Issue Isn't Verification — It's Region Compliance
Most coverage of this focuses on the ID requirement. That's the wrong thing to focus on.
The actual filter is whether your account is considered "legitimately registered" in the first place. Verification just forces the system to check everything at once: your ID, your registration region, your IP history, your payment method. If any of those don't align, verification becomes the mechanism that surfaces the problem — not the solution to it.
Before this change: risk systems could only guess at inconsistencies. After this change: inconsistencies get confirmed against a real identity.
## Three Scenarios and What to Do
### Scenario 1: You registered from an unsupported region
Highest risk. Even if you pass the ID check, your account may be flagged immediately afterward because the registration region itself is unsupported. Completing verification won't protect you here.
### Scenario 2: You're using Claude across mismatched environments
VPN, foreign phone number, overseas payment on a locally registered account — what were once unprovable inconsistencies are now verifiable ones. The risk window has shrunk significantly.
### Scenario 3: Your identity, region, and payment all match
Relatively safe. Keep them consistent. Don't switch environments frequently. Avoid anything that looks like automation at scale.
## What This Actually Signals
This isn't Claude adding a security layer. It's Claude moving into the same compliance tier as fintech products.
The underlying pressures driving this:
- Abuse at scale — automated accounts, mass content generation, API reselling
- Regulatory pressure — governments are pushing AI platforms toward the same accountability frameworks as financial services
- Monetization integrity — subscription tiers only work if accounts are real and traceable

For individual users, the practical consequence comes down to one thing: the era of anonymous usage is ending.
## Practical Recommendations
**If Claude is your primary AI tool:** Start building redundancy now. ChatGPT, Gemini, and Perplexity all overlap significantly in capability. Don't run a single point of failure.

**If you're in a supported region with consistent credentials:** Complete verification when prompted. Keep your IP, payment method, and account region aligned going forward.

**If you're in an unsupported region:** This is not a verification problem you can solve with a better ID. The account-level risk exists regardless. Treat Claude as a supplementary tool, not infrastructure.
## Before/After: What Changed for Users
| Factor | Before KYC | After KYC |
|---|---|---|
| Anonymous usage | Possible | Increasingly restricted |
| Region mismatch | Low-risk | Actively verified |
| VPN usage | Tolerated | Flagged against ID |
| Account suspension trigger | Usage behavior only | Behavior + identity + region |
| Heavy users | No ID required | First to be verified |
## Frequently Asked Questions
**Does completing verification mean my account is safe?** Not automatically. Verification is one check. Region compliance is a separate check. Both need to pass.

**Is this permanent or a test rollout?** All signals point to permanent. This is an infrastructure-level change, not a temporary policy test.

**What if I refuse to verify?** Feature restrictions increase over time. For paid-plan users, full account access will likely require completion eventually.

**Does Anthropic see my ID?** No — Persona processes it. Anthropic receives a verification result, not the underlying document.
## The Bottom Line
If your account is clean — consistent region, payment, and IP — verify when asked and move on. If your account has inconsistencies built into it, verification is the moment those become part of a permanent record. That's the real risk.
This isn't a Claude-specific story. More AI platforms will follow the same path. Claude is just the first major one to make it explicit.