Anthropic’s most dangerous AI model just fell into the wrong hands
Published a day ago on Apr 27th 2026, 2:00 pm
By Web Desk

Anthropic’s Mythos AI model, a powerful cybersecurity tool that the company said could be dangerous in the wrong hands, has been accessed by a “small group of unauthorized users,” Bloomberg reports. An unnamed member of the group, identified only as “a third-party contractor for Anthropic,” told the publication that members of a private online forum got into Mythos via a mix of tactics, utilizing the contractor’s access and “commonly used internet sleuthing tools.”
The Claude Mythos Preview is a new general-purpose model that’s capable of identifying and exploiting vulnerabilities “in every major operating system and every major web browser when directed by a user to do so,” according to Anthropic. Official access to the model is limited to a handful of companies through the Project Glasswing initiative, including Nvidia, Google, Amazon Web Services, Apple, and Microsoft. Governments are also eyeing the technology. Anthropic currently has no plans to release the model publicly due to concerns that it could be weaponized.
“We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” an Anthropic spokesperson said in a statement to Bloomberg. Anthropic currently has no evidence that the unauthorized access affects the company’s own systems or extends beyond the third-party vendor’s environment.
The model was reportedly accessed illicitly on April 7th, the same day that Anthropic announced it was releasing Mythos to a limited number of companies for testing. The group that gained the unauthorized access has not been publicly identified, though Bloomberg reports that its members are part of a Discord channel that seeks out information about unreleased AI models.
The group accessed Mythos by using knowledge of Anthropic’s other model formats obtained from a recent Mercor data breach to make “an educated guess” about its online location. Members have been using Mythos regularly since gaining access — providing screenshots and a live demonstration of the model as evidence to Bloomberg — though reportedly not for cybersecurity purposes in an attempt to avoid detection by Anthropic. Other unreleased Anthropic AI models have also been accessed by the group, according to Bloomberg.