Anthropic’s most dangerous AI model just fell into the wrong hands

A Discord group has had access to the Mythos model for two weeks.

Anthropic’s Mythos AI model, a powerful cybersecurity tool that the company said could be dangerous in the wrong hands, has been accessed by a “small group of unauthorized users,” Bloomberg reports. An unnamed member of the group, identified only as “a third-party contractor for Anthropic,” told the publication that members of a private online forum got into Mythos through a combination of the contractor’s access and “commonly used internet sleuthing tools.”


The Claude Mythos Preview is a new general-purpose model that’s capable of identifying and exploiting vulnerabilities “in every major operating system and every major web browser when directed by a user to do so,” according to Anthropic. Official access to the model is limited to a handful of companies through the Project Glasswing initiative, including Nvidia, Google, Amazon Web Services, Apple, and Microsoft. Governments are also eyeing the technology. Anthropic currently has no plans to release the model publicly due to concerns that it could be weaponized.
“We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” an Anthropic spokesperson said in a statement to Bloomberg. Anthropic currently has no evidence that the unauthorized access has affected the company’s own systems or extends beyond the third-party vendor’s environment.
The model was reportedly accessed illicitly on April 7th, the same day that Anthropic announced it was releasing Mythos to a limited number of companies for testing. The group that gained the unauthorized access has not been publicly identified, though Bloomberg reports that its members are part of a Discord channel that seeks out information about unreleased AI models.
The group reportedly located Mythos by making “an educated guess” about where it was hosted, drawing on knowledge of Anthropic’s other model formats obtained from a recent Mercor data breach. Members have been using Mythos regularly since gaining access — providing screenshots and a live demonstration of the model as evidence to Bloomberg — though they have reportedly avoided using it for cybersecurity purposes so as not to be detected by Anthropic. The group has also accessed other unreleased Anthropic AI models, according to Bloomberg.