The developer behind the malicious chatbot FraudGPT is expanding their repertoire with even more advanced adversarial tools built on generative AI and Google's Bard technology. One of these upcoming tools, named DarkBERT, will use a large language model (LLM) whose knowledge base is drawn from the entirety of the Dark Web.
The discovery of another AI-based hacking tool, known as WormGPT, led an ethical hacker to tip off researchers about the plans of FraudGPT's creator, who goes by the alias “CanadianKingpin12” on hacker forums. According to SlashNext, the developer is currently working on two new malicious chatbots, DarkBART and DarkBERT, which will give threat actors ChatGPT-like AI capabilities that go well beyond existing cybercriminal AI offerings.
In a blog post published on August 1, SlashNext warned that these AI-powered bots could significantly lower the barrier to entry for aspiring cybercriminals, enabling them to develop sophisticated business email compromise (BEC) phishing campaigns, exploit zero-day vulnerabilities, identify weaknesses in critical infrastructure, distribute malware, and carry out other nefarious activities.
Daniel Kelley, a researcher at SlashNext, emphasized the rapid progression from WormGPT to FraudGPT and now to DarkBERT in less than a month. This highlights the substantial influence of malicious AI on the cybersecurity and cybercrime landscape.
DarkBART, a dark version of Google's Bard AI, will be based on a large language model called DarkBERT. That model was created by South Korean data-intelligence firm S2W with the intention of combating cybercrime, and access to it is currently restricted to academic researchers, which makes the claimed unauthorized access notable.
CanadianKingpin12 claims to have gained access to DarkBERT and demonstrated its capabilities in a video. This version of DarkBERT was trained on a vast corpus of text from the Dark Web. CanadianKingpin12 also said that the new bot can be integrated with Google Lens, allowing users to send text accompanied by images, a significant development given that previous ChatGPT-like AI offerings have been limited to text-only interactions.
The second adversarial tool, also named DarkBERT but unrelated to the Korean AI, uses the entirety of the Dark Web as the knowledge base for its LLM, granting threat actors access to the collective knowledge and tactics of the underground hacker community. CanadianKingpin12 claims that it, too, will feature Google Lens integration.
Kelley pointed out that, much like their benevolent counterparts, developers of adversarial AI tools may soon provide application programming interface (API) access to the chatbots. This would allow seamless integration into cybercriminal workflows and code, further lowering the barriers to entry in the world of cybercrime.
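To make the API point concrete, the sketch below shows what programmatic access to any chatbot typically looks like in Python. It is purely illustrative: the endpoint, model name, and payload fields are invented for demonstration, and no API for these malicious tools has been published.

```python
import requests

# Hypothetical illustration of generic chatbot API access. The endpoint,
# model name, and payload schema below are invented for demonstration;
# they do not correspond to any published service.
API_URL = "https://api.example-chatbot.invalid/v1/chat"
API_KEY = "YOUR_API_KEY"  # placeholder credential

def query_chatbot(prompt: str) -> str:
    """Send a prompt to a generic chat endpoint and return the reply text."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "example-model",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```

The point of the example is that a few lines of standard HTTP code are enough to script a chatbot into a larger program, which is why API access would lower the barrier to entry so sharply.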
Consequently, a proactive approach will be necessary to defend against these evolving threats. Beyond traditional phishing-awareness training, organizations should provide BEC-specific training that educates employees on the nature of these attacks and the role AI plays in them. Enterprises should also strengthen email verification by implementing strict verification processes and keyword flagging to combat AI-driven threats.
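As a concrete example of the keyword-flagging measure, a minimal sketch in Python might scan inbound mail for phrases commonly seen in BEC lures. The phrase list and the matching logic below are illustrative assumptions, not a vetted detection rule set.

```python
# Illustrative BEC indicator phrases; a production rule set would be far
# larger and tuned to the organization's own mail traffic.
BEC_KEYWORDS = [
    "urgent wire transfer",
    "change of bank details",
    "updated payment instructions",
    "confidential acquisition",
    "gift cards",
]

def flag_bec_indicators(subject: str, body: str) -> list[str]:
    """Return any indicator phrases found in an inbound email."""
    text = f"{subject}\n{body}".lower()
    return [kw for kw in BEC_KEYWORDS if kw in text]

# Example: a message combining two common BEC lures.
hits = flag_bec_indicators(
    subject="Re: updated payment instructions",
    body="Please process an urgent wire transfer to the new account today.",
)
if hits:
    print(f"Flag for manual review; matched indicators: {hits}")
```

Matches are best treated as a trigger for the stricter verification processes described above rather than as a verdict on their own, since AI-generated lures can easily avoid stock phrasing.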
As cyber threats continue to evolve, cybersecurity strategies must adapt in step. A proactive, educated approach will be the most effective weapon against AI-driven cybercrime, according to Kelley.
In summary, the developer behind FraudGPT is expanding their malicious AI offerings with DarkBART and DarkBERT. These chatbots will provide threat actors with advanced AI capabilities that can be integrated with Google Lens. They will grant cybercriminals the ability to carry out sophisticated attacks, highlighting the need for proactive cybersecurity measures.

