A new hacker tool called FraudGPT has emerged on the Dark Web, offering a malicious AI chatbot service to cybercriminals. Sold on a subscription basis and marketed explicitly for malicious activity, FraudGPT has been circulating on Telegram channels since Saturday, July 22. Pricing starts at $200 per month and runs up to $1,700 per year, and the actor behind the tool claims more than 3,000 confirmed sales and positive reviews so far. Researchers from Netenrich first discovered the advertisements for FraudGPT and published a post about it on July 25.
FraudGPT is part of a growing trend in which threat actors fold generative AI features into their tools. Like legitimate chatbots such as ChatGPT, these tools are built on models trained on large datasets and can generate human-like text from the input they receive. WormGPT, a similar AI-driven hacker tool, has been in circulation since July 13th. Both enable attackers to put AI to work crafting phishing campaigns and generating messages that deceive victims into falling for scams.
FraudGPT offers threat actors a wide range of functionality: writing malicious code, creating “undetectable” malware, finding non-VBV BINs (card number ranges not enrolled in Verified by Visa, a term from credit card fraud), creating phishing pages, building hacking tools, finding hacking groups and underground markets, writing scam pages and letters, finding leaks and vulnerabilities, and teaching users to code or hack. One of its main use cases, as highlighted by Netenrich, is the creation of convincing phishing campaigns. Promotional material found on the Dark Web showcased the tool’s proficiency here, demonstrating its ability to draft emails that entice recipients to click on malicious links.
Mainstream chatbots such as ChatGPT ship with ethical safeguards that limit their use as hacker tools. The emergence of FraudGPT and WormGPT, however, highlights how easily threat actors can re-implement the same underlying technology without those safeguards. Security experts have dubbed this phenomenon “generative AI jailbreaking for dummies,” as attackers misuse generative AI apps to bypass their ethical guardrails.
The prevalence of AI-driven tools like FraudGPT and WormGPT lets cybercriminals operate at far greater speed and scale: phishing campaigns can be generated in minutes and launched simultaneously, posing a significant threat to organizations. However, conventional security protections can still detect AI-enabled phishing and the follow-on actions of threat actors. Implementing a defense-in-depth strategy and using security telemetry that supports fast analytics can help organizations identify and mitigate phishing attacks before they escalate.
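As a concrete illustration of one such detection layer, the Python sketch below scores an incoming email against a few classic phishing indicators. It is a minimal sketch under stated assumptions: the `TRUSTED_DOMAINS` allowlist, the `URGENCY_PHRASES` list, and the scoring weights are all hypothetical, and a real mail filter would combine many more signals.

```python
import re
from urllib.parse import urlparse

# Illustrative assumptions: a small allowlist of legitimate domains and
# a few urgency phrases commonly seen in phishing lures.
TRUSTED_DOMAINS = {"example.com", "corp.example.com"}
URGENCY_PHRASES = ["verify your account", "urgent action required", "password expires"]

def phishing_score(subject: str, body: str) -> int:
    """Return a crude risk score; higher means more suspicious."""
    score = 0
    text = f"{subject}\n{body}".lower()

    # 1. Urgency language is a classic social-engineering signal.
    score += sum(2 for phrase in URGENCY_PHRASES if phrase in text)

    # 2. Links pointing outside trusted domains raise the score.
    for url in re.findall(r"https?://[^\s\"'>]+", body):
        host = urlparse(url).hostname or ""
        if host not in TRUSTED_DOMAINS:
            score += 3
        # Raw IP addresses in place of hostnames are especially suspicious.
        if re.fullmatch(r"[\d.]+", host):
            score += 5

    return score

if __name__ == "__main__":
    sample = "Urgent action required: click http://198.51.100.7/login to verify your account"
    print(phishing_score("Account notice", sample))  # prints a nonzero risk score
```

Scoring rather than blocking outright mirrors the defense-in-depth idea: a borderline score can route a message to quarantine or deeper analysis instead of forcing a single allow-or-deny decision.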
Some security professionals advocate fighting adversarial AI with AI-based security tools. By leveraging the same technology, defenders can keep pace with the increased sophistication of the threat landscape. As the adoption of generative AI tools continues to grow, organizations must stay vigilant and continuously adapt their cybersecurity strategies to counter these emerging threats.
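To make the AI-versus-AI idea concrete, here is a toy sketch of a machine-learned phishing classifier in Python using scikit-learn. The four training emails and their labels are invented for illustration; a real deployment would train on large volumes of labeled mail telemetry and retrain continuously as attacker phrasing shifts.

```python
# Toy sketch of an AI-assisted defense: train a simple text classifier to
# separate phishing from benign emails. The training corpus is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data; in practice this would come from mail telemetry.
emails = [
    "Your invoice is attached, let me know if you have questions",
    "URGENT: verify your password now or lose account access",
    "Team lunch is moved to Friday at noon",
    "Click here to confirm your banking details immediately",
]
labels = [0, 1, 0, 1]  # 0 = benign, 1 = phishing

# TF-IDF over unigrams and bigrams feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message; the output is the model's estimated phishing probability.
incoming = ["Action required: your password expires, confirm credentials here"]
print(model.predict_proba(incoming)[0][1])
```

Even a simple bag-of-words model like this picks up on the urgency-laden phrasing that generated phishing lures tend to share, which is one reason defenders see machine-learned filtering as a natural counter to AI-scaled attacks.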

