CyberSecurity SEE

Mythos and AI Tools Increase Cybersecurity Risks in Healthcare

Experts Warn of Faster and Higher Volume Attacks, Rising Patient Safety Worries

Powerful AI tools such as Anthropic’s new Claude Mythos model, in the hands of threat actors, could enable faster attacks and intensify patient safety concerns, healthcare cyber experts predict. (Image: Getty Images)

The landscape of healthcare cybersecurity is being transformed by a new generation of advanced artificial intelligence tools. Experts are sounding alarms about tools such as Anthropic’s Claude Mythos, which can quickly and autonomously identify and exploit software vulnerabilities. According to specialists in the field, the implications are profound: these technologies could significantly accelerate cyberattack timelines, posing new risks to patient safety.

As the healthcare sector increasingly relies on a variety of legacy clinical systems and medical devices, many of which are outdated and not equipped with modern security controls, the potential threats multiply. Dave Bailey, vice president of consulting and strategy at Clearwater, has raised concerns about medical devices like imaging systems and infusion pumps. Many of these devices operate on obsolete software and present challenges in patching without disrupting patient care. The emergence of potent AI models capable of rapidly discovering and exploiting zero-day vulnerabilities has the potential to exacerbate these existing weaknesses.

Errol Weiss, chief security officer of the Health Information Sharing and Analysis Center, has emphasized that healthcare defenders feel chronically behind in the battle against cyberthreats. AI tools akin to Mythos threaten to compress attack timelines from weeks or days down to mere hours or minutes. Weiss warns that this shift could lead to more ransomware attacks with far less warning, heightening the risk of simultaneous disruptions across multiple hospitals.

A recent FBI report found that the healthcare and public health sector was the most frequently targeted sector in 2025, with 460 reported ransomware incidents, compared with 355 for critical manufacturing. Weiss said the looming concern extends beyond the sheer number of intrusions to the potential for rapid, coordinated outages that could immediately affect patient care across wide geographic regions. As AI-assisted offensive tactics evolve, they may outstrip human defensive capabilities unless significant operational changes are made.

The capabilities of the Mythos AI model have prompted Anthropic to declare the software too hazardous for public release. The company is allowing only a select group of tech giants (Microsoft, Google, and Cisco among them) to work with Mythos Preview through an initiative called Project Glasswing. This consortium of about 40 major software and open-source organizations aims to apply the AI to defensive security, with the eventual goal of enabling users to deploy such AI-powered models at scale.

Experts note that if such advanced AI tools were employed responsibly within healthcare security teams, they could serve as formidable defenses against the vulnerabilities plaguing legacy IT systems typical in hospitals. Bailey highlights that these tools could improve threat detection processes, enabling security teams to identify anomalous behavior across diverse healthcare networks, which has traditionally posed significant analytical challenges.

Moreover, AI could enhance vulnerability management by assigning priorities based on actual operational risks rather than relying solely on traditional risk scoring methods. Weiss envisions a future where AI systems continuously scan extensive codebases—such as Electronic Health Records (EHR) and patient portals—for vulnerabilities that may have slipped through conventional security checks. These tools could stress-test legacy devices in controlled settings and compile prioritized lists of vulnerabilities that need remediation before adversaries exploit them. Additionally, they might aid in incident response by efficiently analyzing logs and determining attack vectors during a crisis.

In a sector that often operates with constrained security resources, leveraging AI to bolster defenses is not just a strategic advantage but a necessity. Alarmingly, no major healthcare entities or critical infrastructure organizations currently appear to be involved in the Project Glasswing consortium. Weiss cautions that while restricting access to powerful tools like Mythos is understandable, healthcare organizations need to actively engage in such partnerships to maximize the benefits and strengthen their security infrastructure.

The unique challenges facing healthcare—including patient safety and regulatory complexities—demand that the perspective of the healthcare sector be included in the development and deployment of such powerful tools. Bailey echoes this sentiment, stressing that without healthcare-specific input, solutions may be engineered for more straightforward or less regulated environments, potentially leaving healthcare systems vulnerable to unforeseen threats. The insights and requirements of healthcare professionals are crucial to ensuring that safeguards, testing methodologies, and deployment strategies effectively address the realities of clinical settings.
