Cyberattackers can use GhostGPT to write malicious code for $50

A new AI chatbot called GhostGPT has emerged as a tool for cybercriminals to develop malware, conduct business email compromise scams, and engage in other illicit activities. Like its predecessor WormGPT, the chatbot is an uncensored AI model designed to bypass the security measures and ethical guardrails built into mainstream AI systems such as ChatGPT, Claude, Google Gemini, and Microsoft Copilot.

According to researchers from Abnormal Security, GhostGPT allows bad actors to generate malicious code and obtain unfiltered responses to sensitive or harmful queries that would typically be blocked by traditional AI systems. The chatbot is being marketed for a variety of malicious activities, including coding, malware creation, and exploit development. It can also be used to craft convincing emails for business email compromise (BEC) scams, making it a convenient tool for cybercrime.

Abnormal Security first discovered GhostGPT for sale on a Telegram channel in mid-November, and since then, the chatbot has gained popularity among cybercriminals. Pricing models for GhostGPT range from $50 for one week of usage to $300 for three months. Users receive an uncensored AI model that promises quick responses and can be used without jailbreak prompts. Additionally, the creators of GhostGPT claim that the chatbot does not track user activity, making it attractive to those seeking to conceal their illegal actions.

The rise of rogue AI chatbots like GhostGPT poses a significant challenge for security organizations. These tools lower the barrier to entry for cybercrime, letting individuals with minimal coding skills generate malicious code with ease, while those who can already code can use them to refine their malware and exploit development. Rogue models of this kind also remove the need to jailbreak mainstream GenAI systems before putting them to harmful use.

Since ChatGPT’s debut in late 2022, several other “evil” AI models have surfaced, including WormGPT, EscapeGPT, and FraudGPT. Many of these failed to gain traction, either because they did not deliver on their promises or because they were merely jailbroken versions of existing models. Abnormal Security suspects that GhostGPT may likewise be a wrapper connecting to a jailbroken version of ChatGPT or to another open-source large language model.

GhostGPT appears broadly similar to other uncensored variants, but how it is built remains unclear. EscapeGPT relies on jailbreak prompts, whereas WormGPT was a large language model fully customized for malicious use. The lack of transparency surrounding GhostGPT’s origins makes a definitive comparison with either approach difficult.

As GhostGPT has gained popularity on the underground market, its creators have become more cautious: accounts promoting the chatbot have been deactivated, and sales threads on cybercrime forums have been closed, making it difficult to identify the individuals behind it. This shifting landscape of rogue AI chatbots remains a concerning trend for cybersecurity professionals.

In conclusion, the proliferation of GhostGPT and similar rogue AI chatbots presents a growing challenge for cybersecurity efforts. As cybercriminals leverage these tools to facilitate illegal activities, security organizations must remain vigilant in detecting and mitigating the threats posed by these uncensored AI models.
