EU AI Regulation May Hold Implications for Powerful New Anthropic Model

The recent announcement from Anthropic regarding the release of its artificial intelligence model, Claude Mythos Preview, has created a significant stir in both the technology and policy spheres. This innovative model, which demonstrates strong capabilities in identifying and exploiting software vulnerabilities, will only be made accessible to select tech vendors. This strategic approach allows these companies to enhance their products’ security measures before malicious actors can exploit any weaknesses.
Anthropic’s restricted rollout, known as Project Glasswing, includes notable participants such as Apple, Microsoft and Cisco, along with roughly 40 additional organizations that develop or maintain critical software infrastructure. Anthropic has also held conversations with U.S. government officials about the model’s implications. European leaders have likewise expressed significant interest in the model, particularly following the enactment of new legislation that could shape Anthropic’s operational strategy for high-risk systems.
Thomas Regnier, a spokesperson for the European Commission, communicated the organization’s proactive stance, stating, “We are currently assessing possible implications in light of EU policies and legislation.” He further emphasized the importance of monitoring the security implications of rapidly evolving technology to bolster cyber defenses while also remaining vigilant against potential misuse.
Claude Mythos Preview marks Anthropic’s first major announcement since the February overhaul of its “responsible scaling policy.” The company retracted its earlier commitment to refrain from developing and releasing models unless it could guarantee effective risk mitigation. Anthropic’s chief scientific officer, Jared Kaplan, said the policy shift was necessary because it was impractical to hold back while competitors advanced in the market.
Despite this change, and despite the absence of stringent AI regulation in the United States, new European rules demand careful consideration from Anthropic and other vendors of “general purpose AI” as they develop and release models perceived as risky. Two documents are particularly relevant: the AI Act, which came into effect in August, and the AI code of practice, issued in July. Although adherence to the code is voluntary, Anthropic has committed to compliance.
In the system card for Mythos Preview, Anthropic asserts that “current risks remain low,” a conclusion based on the model’s inability to aid in producing chemical or biological weapons or to make substantial advances in research automation. The model could nonetheless qualify as a “systemic risk” as defined by the AI Act, which states that the designation may apply when a model could disrupt critical sectors or threaten public and economic security.
Under the AI code of practice, Anthropic likely could not grant a full release of Mythos Preview within Europe without adequate safety measures in place. “AI and cybersecurity are closely intertwined,” Regnier said, underscoring that while AI offers groundbreaking solutions to cybersecurity challenges, such models require rigorous research and testing to ensure that proper checks and balances are in place against security risks arising from misuse or malicious actors.
Moreover, the AI Act, alongside the forthcoming Cyber Resilience Act, mandates that Anthropic implement robust cybersecurity protections for the models it produces. Under the provisions of Europe’s AI code of practice, companies are obligated to develop a safety and security framework for their AI models that must be fully disclosed to the European AI Office within five days of confirmation. While the commission has yet to publicize any details regarding Anthropic’s compliance efforts, it is clear that stringent measures are expected.
At least one European government agency has engaged in discussions with Anthropic regarding Mythos Preview. Claudia Plattner, the president of Germany’s Federal Office for Information Security (BSI), indicated that the agency is in ongoing conversations with Anthropic but has yet to test the tool directly. In her emailed statement, she acknowledged the conversations’ importance, highlighting a serious concern over the potential disruption this model may introduce regarding security vulnerabilities and the broader cyber threat landscape.
According to Plattner, the advent of such powerful tools could make classical software vulnerabilities a thing of the past, fundamentally transforming attack vectors and altering the dynamics of cyber threats. That evolution raises critical questions about how long such powerful tools will remain available on the open market, a reality with significant implications for national security and European sovereignty.
In its announcements, Anthropic has expressed that securing critical infrastructure is a top national security priority for democratic nations, emphasizing that governmental bodies have a vital role in assessing and mitigating the associated risks linked to AI models. However, specific insights regarding interactions with governments outside the U.S. remain sparse.
Sven Herpig, the cybersecurity lead at the European tech policy think tank Interface, noted that it is likely most European governments will seek to better understand the capabilities of Mythos Preview, particularly to verify Anthropic’s claims. He suggested that it is improbable governments will request to use the model to test their own systems at this juncture, as larger software developers involved in Project Glasswing are already undertaking such evaluations.