
Dark Web Exposes Numerous ChatGPT Abuse Plans

Cybersecurity researchers at Kaspersky have identified more than 3,000 posts on the dark web in which threat actors seek to exploit or abuse ChatGPT, OpenAI's Artificial Intelligence (AI)-powered chatbot.

According to Kaspersky's Digital Footprint Intelligence service, the dark web has evolved from a hub for stolen data and illicit transactions into a hotspot for developing AI-powered cybercrime tools.

Throughout 2023, Kaspersky researchers observed dark web forum discussions centered on exploiting AI technologies for illegal activities. Threat actors shared details of successful jailbreaks through dark web channels and abused legitimate AI tools for malicious purposes.

The findings show that AI technologies are being woven into cybercriminal activity, with threat actors selling jailbroken versions of popular AI tools. Alarmingly, cybercriminals have developed a preference for ChatGPT, a Large Language Model (LLM)-based tool that simplifies tasks and makes information more accessible; in criminal hands, it introduces new cybersecurity risks.

Alisa Kulishenko, a digital footprint analyst at Kaspersky, explained that threat actors are exploring a range of schemes for applying ChatGPT and other AI tools to illegal activities, including malware development and the illicit use of language models.

Researchers further observed that malicious chatbots such as FraudGPT dominated dark web forum discussions, with nearly 3,000 related posts identified throughout 2023, peaking in March of that year. There was also significant activity around stolen ChatGPT accounts, with more than 3,000 related advertisements.

Notably, cybercrime forums regularly featured discussions on using ChatGPT for illegal activities between January and December 2023, according to Kaspersky’s report. One post suggested using GPT to generate polymorphic malware, which is more difficult to detect and analyze than regular malware.

The use of the OpenAI API to generate code with specific functionality while bypassing security checks was also highlighted, raising the prospect of malware that is difficult to detect. Forum posts further claimed that AI-powered bots can craft personalized phishing emails and leverage deepfake technology to create hyper-realistic simulations.
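
To make concrete what programmatic access of this kind looks like, the following is a minimal, deliberately benign sketch of a request to OpenAI's chat completions endpoint using the official Python SDK; the model name and prompt are assumptions chosen purely for illustration, and none of the abusive prompts described in the report are reproduced here.

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Illustrative, benign request: ask the model to generate a small code snippet.
# The model name below is an assumption for this sketch, not taken from the report.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": "Write a short Python function that checks whether a string is a palindrome.",
        }
    ],
)

print(response.choices[0].message.content)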

Furthermore, researchers noted that ChatGPT-like tools have become part of routine workflows on cybercriminal forums, with threat actors using jailbreaks to unlock additional functionality. Throughout 2023, 249 offers to sell prompt sets were identified, although such listings also raise concerns about scams and phishing pages that merely claim to offer access to these tools.

The rise of AI-powered cybercrime tools poses a significant challenge for cybersecurity experts and law enforcement agencies, as traditional detection methods struggle against such adaptive threats. New strategies and tools must be developed to counter them.

Addressing this emerging threat requires investment in AI-powered security solutions and caution when handling suspicious emails, links, and attachments. Staying informed about the latest trends in cybercrime and AI is equally essential for staying ahead of potential threats.

Given how rapidly AI-powered cybercrime tools are evolving, the cybersecurity community must remain vigilant and proactive in developing effective countermeasures. Staying ahead of the curve is the best way to limit the impact of these tools on individuals and organizations.
