On March 31, 2026, security researcher Chaofan Shou discovered that Anthropic’s Claude Code — its flagship agentic coding CLI — had its entire source code exposed through a source map file accidentally published to the npm registry. The leak revealed 1,900 files and over 512,000 lines of TypeScript code, exposing the full internal architecture, unreleased features, and system prompts of one of the most widely used AI coding tools.
The exposure was caused by a misconfigured build pipeline. When Anthropic published the Claude Code npm package, a .js.map source map file was included in the distribution. Source maps are debugging files that map minified, compiled JavaScript back to the original source code — they are standard in development but should never ship in production packages.
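To see the mechanism concretely, here is a minimal sketch of how a bundled file points at its source map. The file names and contents are hypothetical, not Anthropic's actual build output:

```shell
# Simulate a minified production bundle as a bundler would emit it.
mkdir -p dist
cat > dist/cli.js <<'EOF'
"use strict";var a=()=>console.log("hi");a();
//# sourceMappingURL=cli.js.map
EOF

# Debuggers follow this trailing comment to the .map file, whose
# "sources" and "sourcesContent" fields map back to the original code.
grep -o 'sourceMappingURL=.*' dist/cli.js
```

If the referenced .map file (or, as in this case, a pointer to a downloadable archive of the original source) ships alongside the bundle, anyone who installs the package can walk that trail back to the unminified code.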
The source map file contained a reference to the full, unobfuscated TypeScript source, which was downloadable as a zip archive from Anthropic’s R2 storage bucket. As one developer noted, “a single misconfigured .npmignore or files field in package.json can expose everything.” This was a packaging oversight, not a hack.
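One common safeguard is an explicit files allowlist in package.json, which ships only what it names. A hypothetical sketch (the package name and glob are illustrative):

```shell
# Hypothetical package.json with a "files" allowlist.
# The glob dist/**/*.js matches cli.js but NOT cli.js.map,
# so source maps are excluded from the published tarball.
cat > package.json <<'EOF'
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": ["dist/**/*.js"]
}
EOF
grep '"files"' package.json
```

An allowlist fails safe: a forgotten build artifact is simply not published, whereas a blocklist in .npmignore fails open, shipping anything nobody remembered to exclude.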
The leaked codebase provided an unprecedented look into the architecture of a production-grade AI coding agent. Approximately 50 slash commands were documented in the code, along with the full system prompt used to instruct the underlying Claude model.
Beyond the existing architecture, the leak also exposed several features that were not yet publicly available.
The source code leak comes just five days after a separate Anthropic security incident. On March 26, Fortune reported that approximately 3,000 unpublished assets — including details about an unreleased model called “Claude Mythos,” an invite-only CEO retreat, and internal employee documents — were left exposed in an unsecured CMS data store. Anthropic attributed that incident to “human error in the CMS configuration” and stated it was “unrelated to Claude, Cowork, or any Anthropic AI tools.”
Together, these incidents raise questions about Anthropic’s operational security practices, even as the company positions itself as a safety-focused AI lab.
The leaked repository was quickly archived on GitHub, where it accumulated over 1,100 stars and 1,900 forks within hours. Developer reactions were mixed: some praised the sophisticated tool architecture, while others criticized code quality issues including excessive nesting, repeated utility implementations, and heavy reliance on environment variables. Multiple commenters noted the irony that a tool likely built in part by AI exhibited typical “vibe coding” patterns — messy but functional.
Questions about copyright also surfaced, with some arguing that AI-generated code may lack copyright protection, though this remains legally unsettled.
For the broader AI tooling ecosystem, the leak offers a rare, detailed look at how a production AI coding agent is actually built, from permission systems to multi-agent orchestration. For developers and organizations using Claude Code, the exposed source code itself doesn't create immediate security risks for end users, as no API keys or credentials were included. However, leaked details, including a sentiment detection system and the scope of data the tool collects, may prompt users to scrutinize their trust assumptions more carefully.
The incident also serves as a cautionary tale for any team shipping npm packages: always audit your .npmignore and the files field of package.json before publishing, and never include source maps in production distributions.
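That audit can be automated. Below is a minimal sketch of a pre-publish guard; the dist/ layout and file names are assumptions, and in a real project a check like this would live in a prepublishOnly script, alongside inspecting the actual tarball contents with npm pack --dry-run:

```shell
# Simulate a build that accidentally left a source map in the output.
mkdir -p dist
: > dist/cli.js
: > dist/cli.js.map   # the artifact that should never ship

# Guard: refuse to publish if any .map file is about to be included.
if find dist -name '*.map' | grep -q .; then
  echo "refusing to publish: source maps present"
fi
```

Running this prints "refusing to publish: source maps present"; wired into prepublishOnly with an exit 1 on failure, it would stop npm publish before the package ever left the machine.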
