AI hallucinations cause a new cyber threat: Slopsquatting

Researchers have identified a new threat in the cybersecurity realm known as slopsquatting. The term, coined by Seth Larson, a security developer-in-residence at the Python Software Foundation, echoes the well-known typosquatting technique. But where typosquatting capitalizes on a user's typographical error, slopsquatting exploits mistakes made by AI models: when a model recommends a package that does not exist, an attacker can register that hallucinated name on a public registry and wait for developers to install it.
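To make the failure mode concrete, consider the minimal sketch below. The package name flask_secure_sessions is hypothetical, invented here to stand in for a dependency an AI assistant might hallucinate; it is not drawn from the study.

    # Hypothetical example: "flask_secure_sessions" is an invented name,
    # standing in for a dependency an AI assistant might hallucinate.
    try:
        import flask_secure_sessions  # AI-suggested dependency
    except ModuleNotFoundError:
        # The benign failure mode: the package was never installed.
        # The real danger is the step many developers take next --
        # running `pip install flask-secure-sessions` unverified. If an
        # attacker has already registered that name on PyPI, the install
        # succeeds and the attacker's setup code runs on the machine.
        print("Unknown dependency; verify the name before installing it.")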

A recent study found that a significant share of the packages recommended in its test samples were fake: 19.7%, or approximately 205,000 package names. Open-source models such as DeepSeek and WizardCoder hallucinated more frequently, averaging 21.7%, while commercial models such as GPT-4 had a lower average rate of 5.2%.

Among the models analyzed, CodeLlama was the worst offender, with more than a third of its outputs containing hallucinated packages, while GPT-4 Turbo was the top performer at just 3.59%.

The implications of slopsquatting are serious: a fake package, once installed, can open the door to malware, data breaches, and broader supply chain compromise. With the increasing reliance on AI models for code generation, developers and organizations must stay vigilant and put security measures in place to mitigate these risks.

In response to the growing threat of slopsquatting, cybersecurity experts are urging companies to conduct thorough security assessments of AI models and regularly monitor for any signs of manipulation or exploitation. Additionally, developers are encouraged to exercise caution when using AI-generated code and verify the authenticity of packages before incorporating them into their projects.
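As a hedged sketch of that verification step, the script below checks whether a package name is registered on PyPI at all, using PyPI's public JSON API. The names are assumed to come from AI-generated code; note that existence alone proves nothing about safety, since a slopsquatter may already have claimed the name.

    import sys
    import urllib.error
    import urllib.request

    PYPI_JSON_API = "https://pypi.org/pypi/{name}/json"

    def exists_on_pypi(name: str) -> bool:
        """Return True if PyPI has a project registered under this name."""
        try:
            with urllib.request.urlopen(PYPI_JSON_API.format(name=name)) as resp:
                return resp.status == 200
        except urllib.error.HTTPError:
            # A 404 means the name is unregistered: a likely hallucination,
            # and a name a slopsquatter could still claim tomorrow.
            return False

    if __name__ == "__main__":
        # Names are taken from the command line, e.g. those found in
        # AI-generated requirements files.
        for name in sys.argv[1:]:
            verdict = "registered on PyPI" if exists_on_pypi(name) \
                else "NOT on PyPI (possible hallucination)"
            print(f"{name}: {verdict}")

A registered name still needs human review of the project page, maintainer history, and release dates before it is trusted; a hit from this check is a starting point, not a clearance.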

As the cybersecurity landscape continues to evolve, staying ahead of emerging threats like slopsquatting will be crucial in safeguarding sensitive data and ensuring the integrity of software development practices. By remaining proactive and informed, organizations can better protect themselves against malicious actors seeking to exploit vulnerabilities in AI models and software systems.
