Hugging Face AI Platform Infested With 100 Malicious Code-Execution Models

Researchers have identified a concerning trend that threatens the security of users of the Hugging Face artificial intelligence (AI) platform. Approximately 100 machine learning (ML) models hosted on the platform have been found to contain malicious code that attackers could use to compromise users' machines. The discovery highlights the risks that arise when publicly available AI models are manipulated for malicious purposes.

The team at JFrog Security Research has been investigating the security threats posed by ML models uploaded to platforms like Hugging Face. In a blog post published this week, the researchers detailed their findings and their implications. By scanning the model files uploaded to the repository, they were able to pinpoint models carrying payloads capable of compromising users' systems.

One example highlighted in the post was a PyTorch model uploaded by a user named baller423, whose account has since been deleted. The model contained a payload that injects arbitrary Python code into a critical process when the model is loaded onto a user's machine, opening the door to malicious behavior on that system. The discovery underscores the significant risks that tainted AI models pose to unsuspecting users.
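To illustrate the class of attack involved, the sketch below shows how Python's pickle format, which underlies PyTorch's legacy checkpoint files, can be made to run code during deserialization. This is a minimal, deliberately harmless reconstruction of the general technique, not the actual payload JFrog found; the class name and the echo command are illustrative placeholders.

```python
import os
import pickle


class MaliciousPayload:
    """Illustrative stand-in for a poisoned object embedded in a model file.

    When pickle deserializes this object, __reduce__ tells it to call
    os.system with the given argument -- here a harmless echo, but an
    attacker would substitute an arbitrary shell command.
    """

    def __reduce__(self):
        # pickle will execute: os.system("echo code executed on load")
        return (os.system, ("echo code executed on load",))


# Serializing the object embeds a callable-plus-arguments recipe...
blob = pickle.dumps(MaliciousPayload())

# ...and simply deserializing it runs the command. No method on the
# object ever needs to be called; loading alone is enough.
pickle.loads(blob)
```

The key point is that deserialization alone triggers the embedded command, which is what makes model files built on pickle dangerous to load from untrusted sources.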

Furthermore, the model uploaded by baller423 initiated a reverse shell connection to a real IP address, raising further questions about the intent behind it. The address range was traced back to Kreonet, a high-speed South Korean network dedicated to research and education. Whatever the uploader's purpose, the presence of such code inside AI models underscores the need for stronger security measures to protect users.

Following the removal of that model, the researchers uncovered additional instances of similar payloads pointing at different IP addresses, suggesting the problem is not isolated. In total, roughly 100 potentially harmful models were identified on Hugging Face, a stark reminder of the ongoing risk posed by poisoned AI models and of the importance of proactive security measures to mitigate it.

To understand how attackers can exploit Hugging Face ML models, it helps to look at the loading mechanism itself. PyTorch models are typically loaded with the transformers library, and PyTorch's legacy checkpoint format is built on Python's pickle serialization. Because pickle can reconstruct arbitrary objects, a crafted model file can insert attacker-controlled code into the deserialization process, which then executes the moment the model is loaded.
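A minimal sketch of the loading side, assuming a poisoned checkpoint like the one above has been saved as poisoned_model.bin (a hypothetical filename): on PyTorch versions where torch.load performs full unpickling by default, opening the file is enough to run the payload.

```python
import torch

# torch.load on a legacy pickle-based checkpoint hands the file to
# Python's pickle module, so any __reduce__ payload embedded in it
# executes during deserialization -- before the caller ever sees the
# weights. (On PyTorch versions prior to 2.6, weights_only defaults
# to False, which permits this full unpickling.)
state_dict = torch.load("poisoned_model.bin")  # payload runs here
```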

While platforms like Hugging Face have implemented security protections to detect malware and unsafe models, the threat of malicious AI code persists. Developers should remain vigilant and lean on additional resources such as Huntr, a bug-bounty platform focused on AI and ML, to find and fix vulnerabilities in AI models. By collaborating on such platforms and hardening security protocols, the AI community can work together to protect users and organizations from poisoned models.
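On the defensive side, one practical mitigation is to avoid full unpickling entirely. The sketch below assumes local model files named model.bin and model.safetensors (hypothetical filenames): torch.load's weights_only flag restricts deserialization to plain tensor data, and the safetensors format removes the code-execution path altogether.

```python
import torch
from safetensors.torch import load_file

# Restrict torch.load to tensors and primitive containers; a pickle
# payload that tries to reconstruct arbitrary objects raises an error
# instead of executing. (The flag is available since PyTorch 1.13 and
# is the default behavior from PyTorch 2.6 onward.)
safe_state = torch.load("model.bin", weights_only=True)

# Alternatively, prefer the safetensors format, which stores raw
# tensor bytes plus a JSON header and has no deserialization hook
# through which code could run.
safer_state = load_file("model.safetensors")
```

Preferring safetensors where available is the simpler of the two options, since it eliminates the pickle attack surface by design rather than filtering it at load time.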

In conclusion, the discovery of malicious ML models on Hugging Face shows why continued vigilance and proactive security measures are needed across the AI community. By identifying and mitigating these threats early, researchers and developers can protect users from harm and preserve the integrity of AI platforms. Ongoing work to strengthen security protocols and close vulnerabilities in model formats is essential to minimizing the risks of malicious code injection.
