Anthropic Faces Back-to-Back Internal Leaks After Packaging Error

TechCrunch

Key Points

  • Anthropic leaked nearly 3,000 internal files, including a draft blog post about an unreleased model.
  • A packaging error in Claude Code version 2.1.88 exposed about 2,000 source code files and over 512,000 lines of code.
  • The company described the incidents as human‑error packaging issues, not security breaches.
  • Claude Code is a command‑line tool for developers to write and edit code using Anthropic’s AI.
  • OpenAI recently halted its Sora video‑generation product, citing a shift toward developer‑focused offerings.
  • Security researcher Chaofan Shou identified the leak and posted about it on X.
  • Developers quickly analyzed the leaked code, labeling Claude Code a production‑grade developer experience.
  • Anthropic must improve internal security and packaging processes to avoid further accidental disclosures.

Anthropic experienced two consecutive incidents in which internal files were unintentionally exposed. The first leak, reported last week, made nearly 3,000 internal documents public, including a draft blog post about an unreleased model. The latest incident occurred when the company released version 2.1.88 of its Claude Code package, accidentally bundling roughly 2,000 source code files and over 512,000 lines of code. Anthropic described both events as human‑error packaging issues rather than security breaches. The leaks have drawn attention from competitors and developers, and the timing is notable: OpenAI recently halted its Sora video‑generation product amid rising competition from Claude Code.

Back-to-Back Internal Exposures

Anthropic, a company that has built its public identity around careful AI development and risk transparency, suffered two separate incidents in which internal materials were unintentionally released to the public. The first incident, reported last week, involved the accidental publication of nearly 3,000 internal files. Among those files was a draft blog post describing a powerful new model that the company had not yet announced.

The second incident occurred when Anthropic shipped version 2.1.88 of its Claude Code software package. A packaging error bundled roughly 2,000 source code files and more than 512,000 lines of code into the release, essentially the full architectural blueprint for one of its most important products. Security researcher Chaofan Shou quickly noticed the leak and posted about it on X.
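The article does not say exactly how the extra files were swept into the release, but a common failure mode in package publishing is an over-broad or missing file allowlist. As a minimal, hypothetical sketch (the patterns and file names below are illustrative, not Anthropic's actual pipeline), a pre-release guard can compare the list of files about to be shipped against an explicit allowlist and flag anything unexpected:

```python
import fnmatch

# Hypothetical allowlist of path patterns intended for the published package.
ALLOWED = ["dist/*", "README.md", "LICENSE", "package.json"]

def unexpected_files(manifest, allowed=ALLOWED):
    """Return paths in the release manifest that match no allowed pattern."""
    return [
        path for path in manifest
        if not any(fnmatch.fnmatch(path, pat) for pat in allowed)
    ]

# Illustrative manifest: an internal source file accidentally swept in.
manifest = ["dist/cli.js", "README.md", "src/internal/agent_loop.ts"]
print(unexpected_files(manifest))  # -> ['src/internal/agent_loop.ts']
```

Running a check like this (or simply previewing the tarball contents before publishing) would surface stray internal files before they reach the public registry.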

Company Response

Anthropic responded to multiple outlets with a statement describing the events as a “release packaging issue caused by human error, not a security breach.” While the public wording is measured, the sensitivity of the exposed material suggests internal concern ran considerably higher.

Impact on Claude Code and the Competitive Landscape

Claude Code is not a minor offering; it is a command‑line tool that enables developers to use Anthropic’s AI for writing and editing code, and it has become a formidable competitor in the developer‑focused AI market. The Wall Street Journal noted that OpenAI recently pulled its video‑generation product Sora from the public after just six months, shifting focus toward developers and enterprises—a move partly attributed to Claude Code’s growing momentum.

The leaked material did not contain the AI model itself but rather the software scaffolding that directs the model’s behavior, tool usage, and limitations. Developers swiftly began publishing detailed analyses, describing Claude Code as a “production‑grade developer experience, not just a wrapper around an API.” While competitors may find the architecture instructive, the pace of AI development means any advantage could be short‑lived.

Future Outlook

Anthropic now faces the challenge of reinforcing its internal security and packaging processes to prevent further accidental disclosures. The incidents underscore the delicate balance AI companies must maintain between openness, responsible development, and protecting proprietary technology.

Tags: Anthropic, AI safety, software leak, code security, Claude Code, OpenAI, Sora, developer tools, security breach, technology news
Generated with News Factory - Source: TechCrunch
