Claude Code’s Entire Source Code Leaked via npm Source Maps

On March 31, 2026, security researcher Chaofan Shou discovered that Anthropic’s Claude Code — its flagship agentic coding CLI — had its entire source code exposed through a source map file accidentally published to the npm registry. The leak revealed 1,900 files and over 512,000 lines of TypeScript code, exposing the full internal architecture, unreleased features, and system prompts of one of the most widely used AI coding tools.


Screenshot of Chaofan Shou's tweet announcing the Claude Code source code leak via npm source maps
Image credit: Chaofan Shou on X

How the Leak Happened

The exposure was caused by a misconfigured build pipeline. When Anthropic published the Claude Code npm package, a .js.map source map file was included in the distribution. Source maps are debugging files that map minified, compiled JavaScript back to the original source code — they are standard in development but should never ship in production packages.

The source map file contained a reference to the full, unobfuscated TypeScript source, which was downloadable as a zip archive from Anthropic’s R2 storage bucket. As one developer noted, “a single misconfigured .npmignore or files field in package.json can expose everything.” This was a packaging oversight, not a hack.

Anthropic company logo on a wall
Image credit: Fortune

What the Source Code Revealed

The leaked codebase provided an unprecedented look into the architecture of a production-grade AI coding agent:

  • Tool System: Claude Code uses a plugin-like architecture with approximately 40 discrete, permission-gated tools — each capability (file read, bash execution, web fetch, LSP integration) is a separate tool. The base tool definition alone spans 29,000 lines of TypeScript.
  • Query Engine: A 46,000-line module handles all LLM API calls, streaming, caching, and orchestration.
  • Multi-Agent Orchestration: The system can spawn sub-agents (“swarms”) for parallelizable tasks, each with specific tool permissions.
  • IDE Bridge: A bidirectional, JWT-authenticated communication system connects Claude Code to VS Code and JetBrains extensions.
  • Persistent Memory: A file-based storage system maintains user context across sessions.
  • Runtime: Claude Code runs on Bun (not Node.js) and uses React with Ink for terminal UI rendering, with Zod v4 for schema validation.

Approximately 50 slash commands were also documented in the code, along with the full system prompt used to instruct the underlying Claude model.

Unreleased Features Discovered

Beyond the existing architecture, the leak exposed several features not yet publicly available:

  • “Kairos” (Assistant Mode): A new interaction mode visible through feature flags in the codebase.
  • Buddy System: Described as a “Tamagotchi-style companion creature system with ASCII art sprites” — though Hacker News commenters speculated this may have been an April Fool’s joke: “Buddy system is this year’s April Fool’s joke, you roll your own gacha pet that you get to keep.”
  • Undercover Mode: A mode that strips internal Anthropic information from employee open-source contributions.
  • Sentiment Detection: A regex-based system for detecting negative sentiment in user prompts, which is then logged.

Part of a Broader Pattern

The source code leak comes just five days after a separate Anthropic security incident. On March 26, Fortune reported that approximately 3,000 unpublished assets — including details about an unreleased model called “Claude Mythos,” an invite-only CEO retreat, and internal employee documents — were left exposed in an unsecured CMS data store. Anthropic attributed that incident to “human error in the CMS configuration” and stated it was “unrelated to Claude, Cowork, or any Anthropic AI tools.”

Together, these incidents raise questions about Anthropic’s operational security practices, even as the company positions itself as a safety-focused AI lab.

Community Reaction

The leaked repository was quickly archived on GitHub, where it accumulated over 1,100 stars and 1,900 forks within hours. Developer reactions were mixed: some praised the sophisticated tool architecture, while others criticized code quality issues including excessive nesting, repeated utility implementations, and heavy reliance on environment variables. Multiple commenters noted the irony that a tool likely built in part by AI exhibited typical “vibe coding” patterns — messy but functional.

Questions about copyright also surfaced, with some arguing that AI-generated code may lack copyright protection, though this remains legally unsettled.

What This Means

For the broader AI tooling ecosystem, the leak offers a rare, detailed look at how a production AI coding agent is actually built — from permission systems to multi-agent orchestration. For developers and organizations using Claude Code, the exposed source code itself doesn’t create immediate security risks for end users, as no API keys or credentials were included. However, the sentiment detection system and the scope of data the tool collects may prompt users to scrutinize their trust assumptions more carefully.

The incident also serves as a cautionary tale for any team shipping npm packages: audit your .npmignore and the files field in package.json before publishing, and never include source maps in production distributions.
