Attackers can use GhostGPT to write malicious code for $50




A new AI chatbot known as GhostGPT has emerged, providing cybercriminals with a powerful tool for developing malware, conducting business email compromise (BEC) scams, and engaging in other illegal activities. Like earlier tools such as WormGPT, this uncensored AI model is designed to bypass the ethical constraints and safety guardrails built into mainstream AI systems such as ChatGPT and Microsoft Copilot.

Abnormal Security researchers have highlighted the dangers posed by GhostGPT, emphasizing its ability to generate malicious code and provide unfiltered responses to sensitive queries. A blog post released by the researchers this week reveals that GhostGPT is being actively marketed for a range of nefarious activities, including coding, malware creation, and exploit development. The chatbot can also craft convincing emails for business email compromise scams, making it a convenient tool for cybercrime. In a test conducted by the security vendor, GhostGPT generated a highly convincing DocuSign phishing email, demonstrating its capabilities.

The availability and affordability of GhostGPT have contributed to its widespread adoption among cybercriminals. Priced at $50 for one week, $150 for one month, and $300 for three months, this uncensored AI model promises quick responses to queries without the need for any jailbreak prompts. Additionally, the authors of GhostGPT claim that the chatbot does not maintain user logs or record user activity, making it an attractive option for individuals seeking to conceal their illicit actions.

Rogue AI chatbots like GhostGPT present a significant challenge for security organizations, as they lower the barrier for entry into cybercrime. These tools empower individuals with minimal coding skills to generate malicious code effortlessly, while also enhancing the capabilities of more experienced threat actors. By eliminating the need to jailbreak AI models like ChatGPT, these chatbots enable bad actors to engage in harmful and malicious behavior without significant effort.

The emergence of GhostGPT follows a trend of malicious AI models built explicitly for illegal activities. Earlier tools such as WormGPT and EscapeGPT attempted to monetize their services in cybercrime marketplaces, with varying degrees of success. The security vendor assessing GhostGPT suspects it may be a jailbroken version of ChatGPT or a wrapper around another open-source language model, highlighting the lack of transparency surrounding the chatbot's origins.

As the popularity of GhostGPT grows within underground circles, its creator(s) have become more cautious, deactivating promotional accounts and shifting to private sales. The identity of the individual(s) behind GhostGPT remains obscured, as sales threads on cybercrime forums have been closed, leaving investigators with limited information about the chatbot’s creators. This veil of secrecy surrounding GhostGPT raises concerns about the proliferation of AI-powered tools in the hands of cybercriminals and underscores the ongoing challenge faced by security professionals in combating evolving threats in the digital landscape.

