
Slopsquatting: The New Cyber Threat from AI


Researchers from the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma have issued a warning about a new threat to the software supply chain called “Slopsquatting.”

The term “Slopsquatting” was coined by Seth Larson, a security developer at the Python Software Foundation (PSF), because the technique resembles typosquatting. Instead of relying on a user’s typing mistake, as typosquatting does, threat actors rely on the errors of an AI model.

This threat arises when generative AI models such as large language models (LLMs) suggest software packages that do not exist, a phenomenon known as package hallucination. Attackers can exploit these gaps to deliver malicious packages under the hallucinated names.

Attackers can exploit hallucinated package names in AI-generated code by registering those names themselves and using them to spread malware. Since many developers follow AI recommendations without thorough scrutiny, this creates a real security risk.
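To make the attack surface concrete, here is a minimal defensive sketch in Python: it checks whether AI-suggested package names are actually published on PyPI before anything is installed. The script and its helper function are hypothetical illustrations, not from the study; the only external dependency is PyPI's public JSON API, which returns HTTP 404 for unregistered names.

```python
"""Guard against slopsquatting: verify that AI-suggested package
names are actually published on PyPI before installing them."""

import sys

import requests  # third-party: pip install requests


def package_exists(name: str) -> bool:
    """Return True if `name` is a registered PyPI project.

    PyPI's JSON API answers 200 for published projects and
    404 for names that do not exist.
    """
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200


if __name__ == "__main__":
    for name in sys.argv[1:]:
        if package_exists(name):
            print(f"exists   {name}")
        else:
            # Either the model hallucinated this name, or an attacker
            # has not (yet) registered it -- do not install blindly.
            print(f"missing  {name}")
```

Note that existence alone proves nothing about safety: once an attacker has registered a hallucinated name, this check passes, so it only catches suggestions that currently point at nothing at all.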

These package hallucinations are particularly dangerous as they have proven to be persistent, repetitive, and believable.

An analysis of 16 code generation models found that about 20 percent of the recommended software packages did not exist. CodeLlama was notably prone to hallucination, with over a third of its suggestions being incorrect; GPT-4 Turbo performed best, with only 3.59 percent hallucinations.

The study also found that open-source models such as DeepSeek and WizardCoder hallucinate more often on average than commercial models such as GPT-4: 21.7 percent of suggestions for open-source models versus 5.2 percent for commercial ones.

The study shows that many package hallucinations from AI models are repeatable rather than random, which makes them particularly useful to attackers. Approximately 38 percent of the hallucinated package names resemble real packages, while only 13 percent are simple typos; many of the names are therefore semantically convincing.

When 500 prompts that had previously produced hallucinated packages were re-run ten times each, 43 percent of the hallucinated names reappeared in all ten rounds, and 58 percent reappeared in more than one round. The researchers argue that insufficient security testing by providers such as OpenAI leaves the models more vulnerable to this failure mode.

While no Slopsquatting attacks have been confirmed so far, the researchers highlight the potential risk. Organizations should be aware of this software supply chain vulnerability and take precautions to mitigate it.
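One such precaution, sketched below under assumed conventions (the helper names and the 90-day threshold are illustrative, and the sketch assumes PyPI's JSON API exposes per-release upload times, which it currently does), is to distrust very recently registered packages: a name an AI just suggested that only appeared on PyPI days ago fits the slopsquatting pattern.

```python
"""Flag AI-suggested packages whose first PyPI upload is very recent:
a brand-new registration matching a hallucinated name is a red flag
for slopsquatting."""

from datetime import datetime, timedelta, timezone
from typing import Optional

import requests  # third-party: pip install requests


def first_upload_time(name: str) -> Optional[datetime]:
    """Return the earliest upload time across all releases of `name`,
    or None if the package is not on PyPI at all."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code != 200:
        return None
    uploads = [
        # Timestamps look like "2023-01-01T00:00:00.000000Z".
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in resp.json()["releases"].values()
        for f in files
    ]
    return min(uploads, default=None)


def looks_suspicious(name: str, min_age_days: int = 90) -> bool:
    """Hypothetical policy: distrust names that are missing from PyPI
    or were first published less than `min_age_days` days ago."""
    first = first_upload_time(name)
    if first is None:
        return True  # nonexistent -- likely a hallucination
    return datetime.now(timezone.utc) - first < timedelta(days=min_age_days)
```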

