Anthropic has once again unintentionally disclosed confidential data related to its Claude AI family. This time, it was the internal source code of Claude Code, the company's AI coding assistant. The incident, which occurred on March 31, 2026, was not the result of a cyberattack but most likely of a packaging error during an update rollout.
The breach came to light when security researcher Chaofan Shou discovered that the latest update of Claude Code (version 2.1.88) shipped a source map file, cli.js.map, inside its npm package. This file allowed anyone to reconstruct and view the tool's original code. Following the discovery, the source map quickly circulated within developer communities, with users downloading, analyzing, and sharing the exposed code.
The mishap was likely the result of a technical oversight during the release process. Software like Claude Code is typically written in a human-readable language such as TypeScript, then compiled and minified into a compressed bundle for release. Minification primarily reduces file size, but as a side effect it also makes the original code much harder to read or reverse engineer.
In this instance, Anthropic mistakenly included in the public release a source map file meant for internal use. A source map helps developers debug by mapping the compiled output back to the original source files. By shipping this file in the npm package, the company inadvertently enabled anyone to reconstruct and read the complete codebase, revealing the internal workings, architecture, and design of Claude Code.
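Mechanically, recovering code from a source map is trivial when the map embeds the original sources in its `sourcesContent` field, as bundler-generated maps often do. The sketch below illustrates the idea against the standard source map v3 format; the file names and code snippets are purely hypothetical, not taken from the leak:

```python
import json

# A minimal, made-up v3 source map standing in for a file like cli.js.map.
# When "sourcesContent" is present, the map carries the full original
# source files verbatim alongside the compiled output's mappings.
source_map = json.dumps({
    "version": 3,
    "file": "cli.js",
    "sources": ["src/agent.ts", "src/telemetry.ts"],      # hypothetical paths
    "sourcesContent": [
        "export const run = () => console.log('agent');\n",
        "export const track = (event: string) => void event;\n",
    ],
    "mappings": "AAAA",
})

def extract_sources(map_json: str) -> dict:
    """Recover original file contents from a v3 source map."""
    m = json.loads(map_json)
    # Pair each source path with its embedded original content.
    return dict(zip(m.get("sources", []), m.get("sourcesContent") or []))

for path, code in extract_sources(source_map).items():
    print(path, "->", len(code), "chars recovered")
```

No deobfuscation or tooling beyond a JSON parser is needed, which is why the leaked code spread so quickly once the map file was noticed.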
The leaked code revealed over 500,000 lines of Claude Code source across nearly 2,000 internal files. It exposed crucial components of Anthropic's AI tool, including internal APIs, telemetry systems, encryption logic, and the communication mechanisms between components. Developers exploring the code also unearthed hints about upcoming features, such as a reactive coding assistant and an "always-on" AI agent named "KAIROS."
The leak also reportedly contained internal developer comments, offering insight into how Anthropic's engineers think about the code, including discussions of feature performance and complexity.
This is not an isolated incident for Anthropic: a similar leak occurred in early 2025, also involving a source map file in a public release. The recurrence of such sensitive leaks raises concerns about Anthropic's release procedures and quality control, particularly given its prominent position in the AI industry. Users are questioning how a company that emphasizes AI safety and reliability could allow repeated oversights of this kind.