I ultimately gave up on Claude Code.
Claude was already banning accounts frequently, and now it has moved straight to requiring KYC.
Although I haven't been prompted for it yet, I also worry that the accounts I purchased may not hold up.
This made me wonder: does an AI service adopting real-name verification actually enhance security, or does it quietly raise the barrier to entry?
On the surface, it seems that Anthropic is moving toward compliance; but from another perspective, it also exposes a somewhat unseemly reality: when tools become powerful enough to influence the real world, a company's first reaction is no longer to trust users but to manage them.
In the past, we used search engines, chat rooms, and even early ChatGPT, without ever needing to prove "who I am."
But now, the process of using an AI assistant increasingly resembles opening an account or making cross-border transfers—KYC, verification, compliance—step by step.
I think this isn't just an additional step but three solid barriers:
🔸 First is the barrier itself.
For users in mainland China and those highly sensitive about privacy, KYC directly blocks some people from access.
Verification failures, regional restrictions, data-collection requirements: each one ultimately means losing users, who will simply turn to lighter-weight tools.
🔸 Second is the data issue.
Anthropic states that this data won't be used for training models or marketing but will be handled by third parties (like Persona).
But in reality, there are many cases of data leaks, and even if the rules are clear, users' concerns are hard to fully dispel.
🔸 Finally, the question of boundaries.
My understanding is that the more powerful AI becomes, the more it needs a traceable responsibility mechanism.
But the problem is, if all mainstream AIs move toward strong real-name systems, how much "anonymous" space can users still retain?
Will the kind of unrecorded, untagged thinking, experimentation, and exploration gradually disappear?
Continuing to look for other AI tools...