A critical security flaw in the PyTorch machine learning framework has recently been uncovered, posing a serious risk to users. The vulnerability, identified as CVE-2025-32434, allows malicious actors to execute arbitrary code on systems that load AI models, even when protective settings like weights_only=True are enabled. The issue affects all PyTorch versions up to and including 2.5.1, as highlighted in a security advisory released earlier this week. The PyTorch team has since addressed the vulnerability in version 2.6.0, which users can install via pip.
The root cause of this vulnerability lies in the torch.load() function within PyTorch, a crucial component used for loading serialized models. For years, developers have relied on the weights_only=True flag as a safety measure against potentially malicious code embedded in model files. However, security researcher Ji’an Zhou demonstrated that this setting can be bypassed, allowing attackers to achieve remote code execution. The discovery contradicts PyTorch’s previous documentation, which presented weights_only=True as a reliable safeguard.
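For illustration, this is the loading pattern the documentation long presented as safe; a minimal sketch, assuming a checkpoint obtained from a third party (the file path and model are hypothetical):

```python
import torch
import torch.nn as nn

# A minimal model whose weights might come from a downloaded checkpoint.
model = nn.Linear(16, 4)

# The long-recommended "safe" pattern: weights_only=True restricts the
# unpickler to tensor data rather than arbitrary Python objects. On
# PyTorch <= 2.5.1, CVE-2025-32434 showed this restriction can be
# bypassed, so loading an untrusted file this way still risks code
# execution. "model_weights.pt" is a hypothetical checkpoint path.
state_dict = torch.load("model_weights.pt", weights_only=True)
model.load_state_dict(state_dict)
```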
The implications of this flaw are significant for ML security. Any application or service that calls torch.load() on an unpatched PyTorch version is at risk, including inference servers, federated learning systems, and model hub integrations. Attackers could upload tampered models to public repositories or inject them into software supply chains, compromising any system that loads them. The PyTorch team has urged all users to update to version 2.6.0 immediately and to report any suspicious model behavior.
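Where an immediate upgrade is not possible, a service that accepts user-supplied model files might at least refuse to deserialize them on a vulnerable runtime. A minimal sketch of such a guard, assuming the installed version string is the only available signal (packaging is a third-party helper for version comparison):

```python
import torch
from packaging import version

def runtime_is_patched() -> bool:
    # CVE-2025-32434 is fixed in 2.6.0; earlier releases are exposed
    # even when torch.load() is called with weights_only=True.
    return version.parse(torch.__version__) >= version.parse("2.6.0")

if not runtime_is_patched():
    raise RuntimeError(
        "PyTorch < 2.6.0 detected; refusing to load untrusted model "
        "files (CVE-2025-32434)."
    )
```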
Reflecting the severity of the vulnerability, CVE-2025-32434 has been assigned a CVSS 4.0 score of 9.3, categorizing it as “Critical.” The rating underscores the flaw’s potential impact, as exploitation requires neither special privileges nor advanced techniques.
To mitigate the risk, the PyTorch team has recommended several immediate steps: upgrading to PyTorch 2.6.0 promptly; auditing existing AI models, particularly those sourced from third-party or public repositories; and staying informed through official security channels such as the PyTorch GitHub Security page and related advisories.
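As one way to support such an audit, checkpoints pulled from public repositories can be checksummed and compared against digests published by their source. A small sketch along those lines (the models/ directory is hypothetical):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical audit pass: print a digest for every cached checkpoint so
# it can be compared against the checksum published by its source.
for checkpoint in sorted(Path("models").glob("*.pt")):
    print(checkpoint.name, sha256_of(checkpoint))
```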
In conclusion, the discovery of CVE-2025-32434 is a stark reminder that even widely trusted machine learning frameworks can harbor critical flaws. Users should take proactive measures to secure their systems: update to the latest version of PyTorch, audit their models thoroughly, and stay vigilant for further security updates. By following these recommendations, users can protect themselves from the risks posed by this critical security flaw.

