Epic leak! 512,000 lines of Claude Code source code exposed!

Anthropic has suffered what may be the largest code leak in the industry to date. The complete source code of Claude Code was exposed through a basic packaging-layer mistake: more than 510,000 lines of TypeScript, 40-plus tool modules, and several unreleased core features are now laid bare to developers worldwide.

This is an accident, and also a warning. Although this leak did not affect the Claude core model weights or user data, it fully exposed Claude Code’s internal architecture logic, system prompt design, and tool-calling mechanisms—along with several unreleased features and potential security logic—to the public eye.

Industry insiders believe this incident will substantially compress the knowledge barrier for engineering AI agents, accelerating the competitive evolution of the entire developer ecosystem.

It’s worth noting that this is not the first time Anthropic has made a mistake like this. In February 2025, an early version of Claude Code was exposed through the same kind of source-map oversight. This latest leak further intensifies external doubts about the software supply-chain security maturity of an AI star company valued at more than $18 billion.

How a single .map file exposed 510,000 lines of code

Fuzzland security researcher Chaofan Shou was the first to expose the incident on X. In Anthropic’s official npm package @anthropic-ai/claude-code, version 2.1.88, a roughly 60MB cli.js.map file was accidentally included.

The cli.js.map file contains two key arrays: sources (a list of file paths) and sourcesContent (the corresponding complete source text). The two arrays’ indices match one-to-one, so anyone who downloads this single JSON file can reconstruct every original source file; the barrier to doing so is trivially low.
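To make the mechanism concrete, here is a minimal sketch of the extraction step in TypeScript (the SourceMap interface and function name are mine for illustration; a real .map file has many more fields):

```typescript
// Sketch: rebuild source files from a source map's parallel arrays.
// sources[i] is a file path; sourcesContent[i] is that file's full text.
interface SourceMap {
  sources: string[];
  sourcesContent: string[];
}

function extractSources(map: SourceMap): Map<string, string> {
  const files = new Map<string, string>();
  map.sources.forEach((path, i) => {
    files.set(path, map.sourcesContent[i]);
  });
  return files;
}

// Toy example; the real cli.js.map is a ~60 MB JSON file read the same way.
const toyMap: SourceMap = {
  sources: ["src/index.ts", "src/util.ts"],
  sourcesContent: ['console.log("hi");', "export const x = 1;"],
};
const extracted = extractSources(toyMap);
console.log(extracted.get("src/index.ts")); // 'console.log("hi");'
```

Nothing here is exotic: the entire "attack" is parsing JSON and zipping two arrays, which is why the operational barrier is described as so low.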

According to analysis, this source map file contains a total of 4,756 source files: 1,906 are Claude Code’s own TypeScript/TSX source files, and the remaining 2,850 are node_modules dependencies. The overall code volume exceeds 512,000 lines.

Within hours of the incident being exposed, mirrored repositories on GitHub surged past 5,000 stars. Anthropic has since removed the source map from the npm package, but earlier versions of the package had already been archived by multiple parties, and the content continues to circulate in the developer community.

Full architecture revealed for the first time

The restored source code provides the most complete view of the Claude Code architecture to date.

The code shows that Claude Code builds its terminal interface using the React and Ink frameworks, runs on the Bun runtime, and has a core REPL loop that supports natural language input and slash commands. Under the hood, it interacts with the LLM API through a tool system.
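The REPL structure described above is a standard pattern. A hedged sketch of that kind of dispatch loop (purely illustrative, not the leaked code; the command names are invented) looks like this:

```typescript
// Illustrative REPL dispatch: input beginning with "/" is routed to a
// slash-command handler; anything else would be forwarded to the LLM API.
type Handler = (args: string) => string;

const slashCommands: Record<string, Handler> = {
  "/help": () => "Available commands: /help, /clear",
  "/clear": () => "Conversation cleared.",
};

function dispatch(input: string): string {
  if (input.startsWith("/")) {
    const [cmd, ...rest] = input.split(" ");
    const handler = slashCommands[cmd];
    return handler ? handler(rest.join(" ")) : `Unknown command: ${cmd}`;
  }
  // In the real tool, this branch would send the prompt to the model
  // and stream back tool calls and text.
  return `[to model] ${input}`;
}

console.log(dispatch("/help"));
console.log(dispatch("fix the bug in util.ts"));
```

The leaked code's interest lies not in this loop itself but in everything hanging off the model branch: the 40-plus tools and the reasoning engine behind them.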

On the tool layer, the code includes more than 40 independent modules, covering file read/write, Bash command execution, LSP protocol integration, and sub-agent generation capabilities—forming a fully featured “universal toolbox.”

On the reasoning layer, a single core file named QueryEngine.ts runs to roughly 46,000 lines. It handles the reasoning logic, token counting, and the entire chain-of-thought loop.

On the multi-agent layer, the leaked code includes a multi-agent coordinator module and a bridge module; the latter connects to mainstream IDEs such as VS Code and JetBrains. Together they show that Claude Code already has the engineering foundations for multi-agent collaboration and deep embedding into development environments.

Unreleased features show up unexpectedly

Among the items most likely to draw attention in this leak are several features that were never publicly released.

The mode code-named Kairos is perhaps the most notable. The code shows it to be an autonomous guardian process with a persistent lifecycle, supporting background sessions and memory integration: Claude can function as a resident background AI agent, continuously handling tasks and accumulating understanding of a project.

A built-in virtual-pet system called the “Buddy System” also appears in the code, complete with 18 species, rarity levels, shiny variants, and attribute stats: an obviously playful touch by Anthropic engineers, sitting right alongside the core architecture in the codebase.

In terms of mode design, the code also reveals “Coordinator Mode,” which allows Claude to dispatch subordinate agents to run in parallel, and “Auto Mode,” an AI classifier that can automatically approve tool permissions, intended to simplify the operation confirmation flow.

In addition, a feature named “Undercover Mode” has sparked controversy. According to the code, when Anthropic employees operate in public repositories, this mode activates automatically, erases AI-related traces from the commit history, and cannot be manually disabled.

Security risks and supply-chain warning

Security researchers point out that although this leak does not directly involve model weights or user privacy data, the potential risks cannot be ignored.

Reports say the leaked content fully exposes internal security logic and may reveal attack vectors such as server-side request forgery (SSRF), providing a foothold for future security research. The open-source community has already begun exploring forked versions based on the leaked code and trying to combine them with other agent frameworks.

From an industry perspective, npm is the world’s largest JavaScript package registry, serving millions of downloads every day. Packaging mistakes like this one signal that companies pursuing fast release cycles must also strengthen artifact review in their CI/CD pipelines.

The direct warning to anyone who publishes npm packages: before publishing, make sure .map files are excluded from the release artifacts. A single sourcesContent field is enough to expose your complete source code to the world.
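One way to enforce this is a prepublish check on the file list that `npm pack --dry-run --json` reports. The sketch below shows only the filtering logic; the PackedFile shape and function name are assumptions for illustration, not real tooling:

```typescript
// Sketch of a prepublish guard: given the files npm would put in the
// tarball, return the paths of any source maps that would leak.
interface PackedFile {
  path: string;
  size: number;
}

function findSourceMaps(files: PackedFile[]): string[] {
  return files.filter((f) => f.path.endsWith(".map")).map((f) => f.path);
}

// Example: a package that accidentally bundles its 60 MB cli.js.map.
const packed: PackedFile[] = [
  { path: "cli.js", size: 1024 },
  { path: "cli.js.map", size: 60 * 1024 * 1024 },
];
const leaks = findSourceMaps(packed);
if (leaks.length > 0) {
  console.error("Refusing to publish; source maps found:", leaks);
}
```

In practice, the “files” allowlist in package.json (or a .npmignore entry for `*.map`) prevents the problem at the source; a guard like this wired into a prepublishOnly script catches regressions.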

The agent ecosystem may be approaching an inflection point

Judging by the impact on the industry, the significance of this incident may go beyond a mere technical accident.

A complete engineering implementation of a top-tier AI agent has been unexpectedly disclosed, which will significantly lower the knowledge barrier in this field. Developers can directly study Claude Code’s architectural design, prompt logic, and tool-calling mechanisms, shortening their own exploration and R&D cycles.

At the same time, the incident inadvertently confirms Anthropic’s accumulated expertise in agent engineering: the multi-agent coordination mechanism and the persistent background guardian process both show engineering depth beyond that of comparable products.

As an extension tool in the Anthropic ecosystem, Claude Code mainly targets professional developers and competes with AI coding assistants such as GitHub Copilot and Cursor. Whether the exposure of its source code will, amid intensifying competition, indirectly accelerate the industry’s collective innovation in AI agent architectures is something observers will be watching closely.
