AI Enthusiasts Haven’t Used Model to Probe for Vulns, Source Tells Bloomberg

An unauthorized group of users has gained access to the Claude Mythos Preview artificial intelligence model, shortly after its developer, AI firm Anthropic, deemed the model too dangerous for public release while hailing it as a significant innovation in AI technology. According to sources cited by Bloomberg, the group has been using the model since its unveiling.
Anthropic's announcement of the Mythos model earlier this month created a buzz in the tech community. The firm granted limited access to a select group of companies under an initiative dubbed "Project Glasswing." These companies, including technology giants Nvidia, Apple, Amazon and Cisco, received access on the understanding that they would use the model to identify and fix security vulnerabilities before potential hostile actors can harness similarly advanced technology.
A Bloomberg source said the unauthorized group operates within a private Discord channel dedicated to unreleased AI models. Notably, a member of the channel said that despite having access to Mythos, the group has not used it to hunt for new exploits.
Anthropic has promoted Mythos on the strength of its purported ability to detect vulnerabilities, a claim with some external endorsement: Britain's AI Security Institute called Mythos "a step up over previous frontier models," lending credibility to Anthropic's assertions about the model's capabilities.
As for how the unauthorized access was achieved, the source said the Discord group employed several methods. One possible tactic involved leveraging a third-party contractor's access to Anthropic's resources. The group also reportedly made educated guesses about the model's online location, drawing on knowledge of the organizational structure and naming formats Anthropic had used for other models. Some of that information may have been acquired through a recent breach at AI startup Mercor, which exposed certain sensitive data.
In response, an Anthropic spokesperson confirmed that an investigation is underway to fully understand the implications of the breach, but said there is currently no evidence that unauthorized use of Mythos has occurred beyond the third-party environment mentioned. The group also purportedly has access to other yet-to-be-released Anthropic models.
The developments surrounding Mythos have not gone unnoticed by rivals. Shortly after Anthropic's selective release, OpenAI announced the launch of GPT-5.4-Cyber, pledging to make the vulnerability-detection model "as widely available as possible" while deterring misuse through measures such as user identity verification and various "trust signals."
The episode highlights the tension between innovation and security that will continue to shape how companies manage potentially dangerous AI. As firms race to harness artificial intelligence's capabilities, including for vulnerability detection, they must also grapple with the risks of unauthorized access and misuse, underscoring the need for robust security measures around unreleased models.
With reporting from ISMG’s David Perera in Northern Virginia.

