Just saw some gossip in the group: Anthropic leaked their own source code again 😂
The latest npm release of Claude Code, v2.1.88, went out just yesterday, and someone discovered it shipped with a 60MB source map. Those in the know understand that once something like that is out, it's effectively handing over the TypeScript source code.
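Why is a leaked source map equivalent to leaking the source? Because the source map format (version 3) can embed every original file verbatim in its `sourcesContent` array, so "recovering" the TypeScript takes no reverse engineering at all. A minimal sketch, using a made-up map with hypothetical file names and contents (not the actual leaked data):

```python
import json

# A tiny, hypothetical source map of the kind bundlers emit next to minified
# JS. When "sourcesContent" is populated, the original files are embedded
# verbatim inside the .map file itself.
source_map = json.loads("""
{
  "version": 3,
  "file": "cli.js",
  "sources": ["src/internal/api.ts", "src/telemetry.ts"],
  "sourcesContent": [
    "export const endpoint = 'https://internal.example.com';",
    "export function track(event: string) { /* ... */ }"
  ],
  "mappings": "AAAA"
}
""")

# Recovering the original TypeScript is just a zip over two parallel arrays.
recovered = dict(zip(source_map["sources"], source_map["sourcesContent"]))
for path, content in recovered.items():
    print(f"--- {path} ---")
    print(content)
```

In other words, publishing the `.map` files in an npm package is not a hint at the source, it *is* the source, which is why a 60MB map translates into nearly two thousand readable files.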
I checked the numbers: the previous version, v2.1.87, was only 17MB, while this one suddenly jumped to 31MB, reaching 60MB after unpacking. Inside are 1,906 neatly organized source files covering internal APIs, the telemetry system, crypto utilities, and IPC communication, all fully exposed.
The most outrageous part? This isn't the first time. They leaked it once when they first released the tool last February, then quietly fixed it. A year later, they stepped into the same trap again. Whether it was an intern taking the blame or a bug in the build scripts, nobody knows, but making the same mistake twice is pretty ridiculous.
Someone on GitHub has already organized the source code, and that repository by ghuntley has nearly a thousand stars. To be clear, the leak involves only the CLI client code; model weights and user data weren't affected, so there's no immediate risk to regular users.
But here's the interesting part: an AI company's own coding tool leaked its own code. In a way, this is destined to become a classic case study for the "AI security" crowd.
Friends in the community who do auditing can dig into it and see whether any hidden "Easter eggs" are still waiting to be found.
Don't ask me what I think: personally, I see this as a free audit window for white-hat hackers. As for Anthropic, they probably need a thorough internal review of their CI/CD pipeline.
This isn’t investment advice, just some gossip. 🍉