Anthropic, the AI company behind the popular Claude Code tool, accidentally leaked the agent's source code. The incident, which exposed approximately 512,000 lines of code across an estimated 1,900 to 2,300 files, was traced to human error in release packaging rather than a security breach. The leaked code, which included unreleased features such as an always-on background agent and a companion pet system, spread quickly online as developers recreated and shared it. Anthropic has since issued a copyright takedown notice to GitHub to have the copies removed.
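Anthropic has not published details of the packaging mistake, so the following is only a minimal sketch of how this class of error can happen, assuming an npm-style release pipeline (Claude Code is distributed through npm; the script and file names here are hypothetical, not Anthropic's actual tooling). Without a `files` allowlist in `package.json`, `npm publish` uploads nearly everything in the package directory, so unbundled source or source maps can ship alongside the compiled CLI. A small pre-publish guard can catch that:

```typescript
// Hypothetical pre-publish guard: abort the release if the npm manifest
// could ship source files or source maps. Illustrative sketch only.
import { readFileSync } from "node:fs";

interface PackageManifest {
  name?: string;
  files?: string[];
}

const manifest: PackageManifest = JSON.parse(
  readFileSync("package.json", "utf8"),
);

const problems: string[] = [];

// With no "files" allowlist, `npm publish` includes almost everything in
// the package directory, which is the usual way unbundled source leaks out.
if (!manifest.files || manifest.files.length === 0) {
  problems.push('no "files" allowlist: npm will publish the whole directory');
}

// Broad globs and .map entries can drag original sources into the tarball,
// since source maps embed or reference the pre-build code.
for (const pattern of manifest.files ?? []) {
  if (pattern.includes("**") || pattern.endsWith(".map")) {
    problems.push(`overly broad or risky pattern: ${pattern}`);
  }
}

if (problems.length > 0) {
  console.error(`Refusing to publish ${manifest.name ?? "package"}:`);
  for (const problem of problems) console.error(`  - ${problem}`);
  process.exit(1); // nonzero exit fails the publish hook
}

console.log("Manifest looks safe to publish.");
```

Wired into a `prepublishOnly` script, a check like this runs automatically before every publish; `npm pack --dry-run`, which prints the exact file list a publish would upload, is a simpler built-in way to inspect a release before it goes out.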
The company acknowledged the error, stated that no sensitive customer data or credentials were exposed, and said it is rolling out safeguards to prevent similar incidents. Meanwhile, developers have welcomed the glimpse the leak offers into the tool's architecture and functionality, even as the episode raises concerns about Anthropic's internal practices and security posture.
This is not Anthropic's first leak: internal documents and plans for an upcoming AI model have previously been exposed. The latest incident has intensified scrutiny of how the company handles sensitive information, and of the broader implications for AI development and security.