Deep analysis of Claude Code source code leak: What does Anthropic want to do in the future?
Editor’s Note: In the early hours of March 31, Chaofan Shou from Solayer Labs discovered that Anthropic’s Claude Code, when published to npm, unexpectedly came bundled with the complete source code. Within a few hours, the relevant code was mirrored to GitHub, quickly drawing attention.
It was a slip in a build configuration, but it also gave outsiders a rare opportunity to observe the product's real progress from the code itself. This article is based on that “accidentally exposed” source code. After reviewing it thoroughly, the author tries to answer two questions: where is Claude Code headed, and what does this mean for users?
From the code, Claude Code is in the process of introducing a series of capabilities not yet made public, including a continuously running autonomous mode (KAIROS), a PROACTIVE mode that can actively execute tasks in the gaps between user messages, and a COORDINATOR mechanism for scheduling multiple sub-agents. Taken together, these changes point in a clear direction: AI is shifting from a tool that responds to instructions to a system that runs continuously and carries out tasks proactively. Meanwhile, designs such as permission automation, stealth collaboration, and team memory reflect a real challenge: when AI truly enters the workflow, how do you improve efficiency while keeping risks and boundaries under control?
So, what does this mean for users?
In fact, these capabilities are not simply stacked one by one; they are gradually building an “agent system”: it has the ability to run in the background, remember across tasks, collaborate in parallel with multiple agents, and directly call tools to complete tasks. In the future, competition may not be only about model capability, but about who can make this system more stable and controllable.
This “unexpected disclosure” itself isn’t the important part—the direction it revealed ahead of time is what matters.
The following is the original text:
Earlier today, @Fried_rice on X discovered that when Anthropic released the Claude Code CLI to npm, it accidentally included a source map file.
Specifically, version 2.1.88 of the @anthropic-ai/claude-code package contains a 59.8MB file, cli.js.map, which embeds the complete original TypeScript source code in its sourcesContent field. This isn't a hack; it's a build-configuration oversight: debug artifacts were bundled into the production release. But it also unintentionally exposed Claude Code's future direction.
I spent a few hours reading through these sources. Below are a few key points I noticed, and what they might mean for users.
Key Features
Automated agentic intelligence is coming
The most frequently appearing feature flag in the codebase is called KAIROS (appearing 154 times). Based on the code, this appears to be an “autonomous daemon mode” that can turn Claude Code into a continuously running agent. It includes background sessions, a memory integration mechanism called “dream,” GitHub webhook subscriptions, push notifications, and channel-based communication.
There is also a PROACTIVE mode (appearing 37 times), which allows Claude to work independently between user messages. The system sends “tick” prompts to keep the agent running, and Claude decides for itself what to do each time it is “woken up.” The prompt explicitly says: “You are running autonomously,” and instructs the model to “seek useful work,” and to “act based on best judgment, not request confirmation.”
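The tick-driven pattern described in that prompt can be illustrated with a minimal sketch. Everything here (the function names, the task-queue shape, the loop bounds) is hypothetical and invented for illustration, not taken from the leaked source:

```python
import time

def seek_useful_work(state):
    # Hypothetical stand-in for the model deciding what to do on wake-up.
    pending = state.get("pending_tasks", [])
    return pending.pop(0) if pending else None

def proactive_loop(state, tick_seconds=1, max_ticks=3):
    # Send periodic "tick" wake-ups; on each tick the agent acts on its
    # own judgment ("seek useful work") rather than waiting for a user.
    completed = []
    for _ in range(max_ticks):
        task = seek_useful_work(state)
        if task is not None:
            completed.append(task)
        time.sleep(tick_seconds)
    return completed

state = {"pending_tasks": ["triage issue #12", "update changelog"]}
print(proactive_loop(state, tick_seconds=0))
# → ['triage issue #12', 'update changelog']
```

The key design point is that the loop, not the user, drives the agent: each tick is a prompt, and idle ticks simply produce no action.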
COORDINATOR_MODE (appearing 32 times) goes even further—it turns Claude into an orchestrator that can generate and manage multiple parallel worker agents. This coordinator is responsible for completing research, implementation, and validation by dispatching tasks to different workers. The system prompt also includes detailed instructions on how to write prompts for workers, when to continue using existing workers, when to generate new agents, and how to handle worker failures.
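As a rough illustration of the orchestrator pattern the COORDINATOR_MODE prompt describes, here is a minimal sketch using a thread pool. The worker and coordinator functions are invented for the example; the real system dispatches prompts to sub-agents, not Python callables:

```python
from concurrent.futures import ThreadPoolExecutor

def worker(task):
    # Hypothetical worker agent: receives a task, returns a (task, result) pair.
    return task, f"done: {task}"

def coordinator(tasks, max_workers=3):
    # Dispatch research/implementation/validation tasks to parallel
    # workers and collect their results into one mapping.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(worker, tasks))

results = coordinator(["research", "implement", "validate"])
print(results["validate"])  # → done: validate
```

The sketch omits what the leaked prompts reportedly spend most of their words on: deciding when to reuse an existing worker, when to spawn a new one, and how to recover from a worker failure.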
Permission prompts may disappear
There is a flag called TRANSCRIPT_CLASSIFIER (appearing 107 times). Based on the context, it seems like an “automatic mode” that uses an AI classifier to automatically approve tool permissions. If this functionality ships, the frequent permission-confirmation prompts that currently interrupt workflows might become optional—or even disappear entirely for trusted operations.
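For intuition, here is a toy rule-based approximation of such a permission gate. The real feature reportedly uses an AI classifier over the conversation transcript, so this only sketches the control flow; the trusted-prefix list and function names are invented:

```python
def classify_tool_call(command, trusted_prefixes=("git status", "ls", "cat")):
    # Toy classifier: auto-approve commands that look read-only,
    # escalate everything else to the user for confirmation.
    if any(command.startswith(p) for p in trusted_prefixes):
        return "auto-approve"
    return "ask-user"

print(classify_tool_call("git status"))   # → auto-approve
print(classify_tool_call("rm -rf /tmp"))  # → ask-user
```

Replacing the prefix check with a model-based judgment is what would make the gate adaptive, and also what makes its failure modes harder to audit.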
Model codenames and versioning system
The source also reveals some internal codenames for Claude models:
Capybara appears to be a variant of Claude 4.6. The comments mention “Capybara v8” and record known issues and fixes, such as a hallucination rate of 29–30% (versus 16.7% for v4), a tendency to over-comment code, and a mechanism called an “assertiveness counterweight.”
Fennec was once a codename and was later migrated to Opus 4.6.
Numbat has not been released yet. There’s a comment saying: “Delete this section when numbat is released.”
The code also mentions opus-4-7 and sonnet-4-8, and explicitly states that these version numbers should not appear in public commits—suggesting these versions already exist internally.
“Undercover Mode”: Anonymous participation in open source
There is also a feature called “Undercover Mode,” which is specifically used by Anthropic employees when enabling Claude Code to submit code to public repositories. This mode removes all AI-related identifiers, including:
· no AI attribution in commits
· model codenames hidden
· all mentions of “Claude Code” or AI removed
· the model is not even told its own model type
The prompt clearly states: “You are running in UNDERCOVER mode in a public/open-source repository. Your commit messages, PR titles, and body text must not include any internal Anthropic information. Do not reveal your identity.”
There is no mandatory kill switch: as long as the system cannot confirm that the current repository is internal, this mode is enabled by default.
Voice mode
VOICE_MODE appears 46 times, indicating that the system has integrated voice interaction capabilities, including speech-to-text and text-to-speech.
A “digital pet” system
This part is a bit interesting. The code includes a hidden BUDDY system, essentially a “digital pet” in the terminal (similar to Tamagotchi). It includes:
· 18 creatures (duck, goose, cat, dragon, octopus, owl, penguin, turtle, ghost, hexagonal dinosaur, etc.)
· rarity system (legendary-tier probability: 1%)
· visual decorations (crown, top hat, halo, wizard hat, etc.)
· attribute values (DEBUGGING, PATIENCE, CHAOS, WISDOM, SNARK)
· and even a “shiny” version
Notably, the species name “capybara” is obfuscated via String.fromCharCode(), apparently to avoid triggering the internal leak-detection system, which also suggests that these codenames are sensitive.
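The obfuscation technique itself is simple: String.fromCharCode() builds a string from numeric character codes, so the literal name never appears in the source text. A Python equivalent, using the character codes that spell “capybara” (the exact call in the leaked file is not reproduced here):

```python
# Numeric character codes, as they would appear in a JavaScript call like
# String.fromCharCode(99, 97, 112, 121, 98, 97, 114, 97)
codes = [99, 97, 112, 121, 98, 97, 114, 97]

# Rebuild the string one code point at a time.
decoded = "".join(chr(c) for c in codes)
print(decoded)  # → capybara
```

This defeats a plain-text grep for the codename, which is exactly what an automated leak scanner would run.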
Other noteworthy features
FORK_SUBAGENT: allows the agent to fork itself into multiple parallel agents
VERIFICATION_AGENT: used for independent, adversarial verification of results
ULTRAPLAN: advanced planning capability
WEB_BROWSER_TOOL: browser automation
TOKEN_BUDGET: allows explicitly specifying a token budget (e.g., “+500k” or “spend 2M token”)
TEAMMEM: supports shared memory across teams
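To make the TOKEN_BUDGET shorthand concrete, here is a hypothetical parser for strings like “+500k” or “spend 2M token”; the actual syntax the flag accepts may well differ:

```python
import re

def parse_token_budget(text):
    # Hypothetical helper: turn shorthand like "+500k" or "2M"
    # into an integer token count. Real TOKEN_BUDGET syntax is unknown.
    m = re.search(r"\+?(\d+(?:\.\d+)?)\s*([kKmM]?)", text)
    if not m:
        return None
    value = float(m.group(1))
    scale = {"k": 1_000, "m": 1_000_000}.get(m.group(2).lower(), 1)
    return int(value * scale)

print(parse_token_budget("+500k"))           # → 500000
print(parse_token_budget("spend 2M token"))  # → 2000000
```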
What does this mean
A few core judgments:
Claude Code is clearly moving toward “high autonomy.” Features such as KAIROS, PROACTIVE, and COORDINATOR point to a future where Claude can run as a background daemon, continuously monitoring repositories and proactively executing tasks.
Permission friction is being reduced. The automated approval mechanism indicates they’re cutting down on frequent manual confirmation steps.
The model versioning system is far more complex than the public API suggests. Internally, there are multiple variants, fast modes, and a codename system corresponding to different capabilities and issues.
Security mechanisms are given very high priority. The Bash command validation alone is more than 2,500 lines of code, on top of sandboxing, undercover mode, and input sanitization.
The product is introducing “personality.” The Buddy system suggests that Claude Code isn’t just a tool—it’s trying to become a “partner.”
How to check it yourself
As of this writing, these source files are still available on npm. Download @anthropic-ai/claude-code@2.1.88, find cli.js.map, parse the JSON, and extract the sourcesContent field. I won't redistribute the code myself, but analyzing and discussing publicly accessible content seems reasonable.
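Those steps can be sketched in Python. The sources and sourcesContent fields are standard Source Map v3; the output directory name and function name here are arbitrary:

```python
import json
import os

def extract_sources(map_path, out_dir="extracted"):
    # Parse a source map (e.g. cli.js.map from the npm package) and
    # write each embedded original file from sourcesContent to disk.
    with open(map_path, encoding="utf-8") as f:
        source_map = json.load(f)
    sources = source_map.get("sources", [])
    contents = source_map.get("sourcesContent", [])
    written = 0
    for path, content in zip(sources, contents):
        if content is None:  # some entries may omit embedded content
            continue
        dest = os.path.join(out_dir, path.replace("../", ""))
        os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
        with open(dest, "w", encoding="utf-8") as out:
            out.write(content)
        written += 1
    return written

# Usage: extract_sources("cli.js.map") writes the embedded
# TypeScript files under ./extracted/ and returns how many it wrote.
```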
The initial discovery is credited to @Fried_rice on X.
[Original Link]