
Hackers gather on the dark web to strategize on weaponizing Generative AI tools


Cybercriminals have turned their attention to the latest wave of Generative AI, particularly ChatGPT and similar tools. The surge in people using these models to create content and build IT solutions has caught the eye of hackers, who see them as a new avenue for malicious activity.

According to cybersecurity experts at Kaspersky, the dark web, an underground marketplace where hackers trade stolen IT data and sensitive personal information, currently hosts around 3,000 posts related to Generative AI. The posts discuss schemes ranging from building malicious versions of chatbots to jailbreaking the original models. Stolen ChatGPT accounts, along with services offering automated bulk creation of such accounts, are also being promoted on dark web channels, further underscoring threat actors' interest in exploiting AI technologies for their own gain.

According to Kaspersky Digital Footprint Intelligence, dark web discussions about using ChatGPT for illicit activities and about tools built on AI technologies have continued even after peaking in March. Threat actors are actively exploring ways to put ChatGPT and other AI models to malicious use, with topics ranging from malware development to using language models to process stolen user data and extract files from infected devices.

In addition to ChatGPT, attention is also turning to projects such as XXXGPT and FraudGPT, which are marketed on the dark web as ChatGPT alternatives with added features and none of the original models' restrictions. These alternative language models are gaining traction among cybercriminals looking for more efficient ways to exploit AI technologies in their operations.

One significant threat identified by Kaspersky is the market for accounts for the paid version of ChatGPT. In 2023, an additional 3,000 posts advertising ChatGPT accounts for sale were discovered across the dark web and shadow Telegram channels. These posts either distribute stolen accounts or promote services that automatically create accounts in bulk on request. While AI tools themselves are not inherently dangerous, cybercriminals are leveraging these language models to lower the barrier to entry into cybercrime, potentially increasing the frequency of cyberattacks.

Overall, the increasing interest of hackers in exploiting Generative AI solutions like ChatGPT for illegal activities underscores the need for enhanced cybersecurity measures and vigilance in the digital space. As technology continues to advance, it becomes imperative for individuals and organizations to stay informed and proactive in protecting their data and systems from evolving cyber threats.
