Unauthorized Group Gains Access to Anthropic’s Mythos Cybersecurity Tool, Report Says

TechCrunch

Key Points

  • Bloomberg reports a private forum accessed Anthropic's Mythos AI security tool via a third‑party contractor.
  • Anthropic confirmed an investigation but found no evidence of impact on its own systems.
  • The group, active on a Discord channel, shared screenshots and a live demo of Mythos.
  • Mythos was released to select vendors, like Apple, under Project Glasswing to limit misuse.
  • Anthropic warned the tool could be weaponized if it falls into malicious hands.
  • The breach highlights supply‑chain risks inherent in AI model distribution.
  • No concrete damage or data loss has been reported so far.

Members of a private online forum have reportedly gained unauthorized access to Anthropic’s newly unveiled cybersecurity AI, Mythos, according to Bloomberg. The group, linked to a Discord channel that hunts unreleased AI models, accessed the tool through a third‑party contractor that works with Anthropic. Anthropic confirmed it is investigating the incident but said it has found no evidence that the breach affected its own systems. Mythos, rolled out to a handful of vendors such as Apple under the Project Glasswing initiative, was designed to strengthen enterprise security; the breach raises concerns that the tool could be repurposed by malicious actors.

Bloomberg reports that a private online forum has managed to obtain Anthropic’s newly announced AI security product, Mythos, shortly after its public unveiling. The group, whose members congregate on a Discord channel dedicated to unreleased AI models, leveraged access granted to a third‑party contractor that provides services for Anthropic. By exploiting that foothold, the forum’s participants were able to run Mythos and share screenshots and a live demonstration with Bloomberg.

Anthropic’s spokesperson confirmed the company is "investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third‑party vendor environments." The firm added that, to date, it has found no evidence that the alleged activity has impacted Anthropic’s own systems or data.

According to the Bloomberg source, the unauthorized users guessed the model’s online location by extrapolating from Anthropic’s naming conventions for other models. Their motive appears to be curiosity rather than sabotage; a source told Bloomberg the group is "interested in playing around with new models, not wreaking havoc with them." Nonetheless, the ability to run Mythos outside the intended vendor pool raises red flags for Anthropic, which marketed the tool as a defensive asset for corporate security.

Mythos was released under Anthropic’s Project Glasswing to a limited set of partners, including Apple, to prevent misuse by bad actors. The AI is billed as a powerful ally for enterprise security teams, capable of detecting and responding to threats in real time. Anthropic warned that, in the wrong hands, the same capabilities could be weaponized against the very organizations it aims to protect.

The breach underscores the difficulty of securing AI tools that depend on third‑party ecosystems. Even if Anthropic’s vetting process included contractual safeguards, the incident shows that a single compromised vendor can open a backdoor to sophisticated technology. Security experts note that as AI models become more integral to critical infrastructure, supply‑chain risks will demand heightened scrutiny.

Anthropic has not disclosed whether it will suspend the compromised contractor’s access or issue a broader recall of Mythos. The company’s next steps will likely involve a thorough audit of its vendor management practices and possibly tighter controls on model distribution.

For now, the incident remains under investigation, and Anthropic has not reported any concrete damage or data loss stemming from the unauthorized use. The episode serves as a cautionary tale for firms racing to deploy advanced AI security solutions without fully accounting for the vulnerabilities introduced by external partners.
