
AI hallucinations cause a new cyber threat: Slopsquatting


Researchers have identified a new cybersecurity threat known as slopsquatting. The term, coined by Seth Larson, security developer-in-residence at the Python Software Foundation, describes a technique resembling typosquatting. But instead of capitalizing on a user's typographical error, threat actors register package names that AI models hallucinate, waiting for developers to install the fake dependency verbatim from AI-generated code.

A recent study found that a significant share of the packages recommended in test samples did not actually exist: 19.7%, or approximately 205,000 package names. Open-source models such as DeepSeek and WizardCoder hallucinated more frequently, averaging 21.7%, while commercial models like GPT-4 had a lower hallucination rate of 5.2%.

Among the AI models analyzed, CodeLlama was the worst offender, with over a third of its suggested packages being hallucinations. GPT-4 Turbo was the top performer, with only 3.59% of its suggestions being false.

The implications are concerning: if an attacker registers a hallucinated package name on a public registry such as PyPI or npm and fills it with malicious code, any developer who installs the AI-suggested dependency can be exposed to malware, data breaches, and other threats. With the growing reliance on AI models for code generation, developers and organizations need to implement security measures that mitigate these risks.

In response to the growing threat of slopsquatting, cybersecurity experts are urging companies to conduct thorough security assessments of AI models and regularly monitor for any signs of manipulation or exploitation. Additionally, developers are encouraged to exercise caution when using AI-generated code and verify the authenticity of packages before incorporating them into their projects.
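One lightweight way to act on that advice is to screen AI-suggested dependencies against a project's own allowlist of already-reviewed packages before anything is installed. The sketch below assumes such an allowlist exists; the package names and the `vet_suggested_packages` helper are illustrative, not part of any standard tool. Names are normalized the way PyPI normalizes them (lowercase, underscores treated as hyphens) so near-duplicates are not waved through.

```python
def vet_suggested_packages(suggested, approved):
    """Split AI-suggested package names into vetted and unvetted lists.

    `approved` is the project's allowlist of dependencies a human has
    already reviewed. Anything outside it should be checked against the
    real registry (and its maintainers) before installation, since it
    may be a hallucinated name an attacker has since registered.
    """
    def normalize(name):
        # Mirror PyPI-style name normalization: case-insensitive,
        # underscores and hyphens treated as equivalent.
        return name.lower().replace("_", "-")

    approved_norm = {normalize(n) for n in approved}
    vetted, unvetted = [], []
    for name in suggested:
        (vetted if normalize(name) in approved_norm else unvetted).append(name)
    return vetted, unvetted


# Hypothetical example: "reqeusts-helper" is the kind of plausible but
# nonexistent name a model might hallucinate.
vetted, unvetted = vet_suggested_packages(
    ["requests", "reqeusts-helper"], approved={"requests", "numpy"}
)
```

In a CI pipeline, a non-empty `unvetted` list would fail the build, forcing a human to confirm the package really exists and is trustworthy before it enters the dependency tree.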

As the cybersecurity landscape evolves, staying ahead of emerging threats like slopsquatting will be crucial to safeguarding sensitive data and preserving the integrity of software development practices. By remaining proactive and informed, organizations can better protect themselves against malicious actors seeking to exploit weaknesses in AI models and the software supply chain.
