The use of Large Language Models (LLMs) by threat actors to create malicious code for cyberattacks is on the rise, with recent campaigns delivering a variety of payloads through phishing emails. Adversaries have been deploying malware such as Rhadamanthys, NetSupport, CleanUpLoader, ModiLoader, LokiBot, and Dunihi using automated AI-generated scripts, signaling a dangerous trend in cybercrime.
These attacks present significant challenges for cybersecurity defenses, as LLMs enable threat actors to rapidly generate and distribute malware through social engineering. A common delivery method is a phishing email carrying a password-protected ZIP archive that contains a malicious LNK file; when opened, the LNK file executes a PowerShell script that appears to have been written by an LLM. This automation streamlines malware deployment and lets attackers conceal their activity within seemingly legitimate documents.
Researchers have found that these LLM-generated scripts are well-formatted and include descriptive comments, indicating a high level of sophistication in the attack methods. Tools like ChatGPT have been used to replicate this automatic script generation, showcasing the ease with which attackers can leverage AI for malicious purposes.
The final payloads of these campaigns have included advanced malware such as Rhadamanthys and CleanUpLoader, revealing the extent to which threat actors are leveraging AI technology. By automating the creation and distribution of malware, cybercriminals can scale their operations and launch more sophisticated attacks.
One example of these attacks involves phishing emails disguised as HR notifications, luring recipients into opening malicious attachments that initiate the infection process. The use of social engineering tactics, such as urgency or impersonation, increases the chances of recipients engaging with the emails and falling victim to the attacks.
The malicious attachments often contain LLM-generated HTML files with embedded JavaScript, acting as initial infection vectors to fetch and execute additional payloads. Despite the simple appearance of these webpages, the underlying code demonstrates the automated nature of the attack, highlighting the potential for LLMs to significantly increase the volume of malicious content produced.
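Because these HTML attachments hide their logic in inline scripts, one defensive angle is to scan attachment HTML for script blocks containing long base64-like runs, a common sign of an embedded, obfuscated payload. The sketch below is a minimal heuristic using only the standard library; the threshold and class names are the author's assumptions, not a production detection rule.

```python
import re
from html.parser import HTMLParser

# Heuristic: a long unbroken base64-style run inside an inline <script>
# often indicates an embedded, obfuscated payload (illustrative threshold).
BASE64_RUN = re.compile(r"[A-Za-z0-9+/=]{200,}")

class ScriptScanner(HTMLParser):
    """Collect excerpts of inline <script> content matching the heuristic."""

    def __init__(self) -> None:
        super().__init__()
        self._in_script = False
        self.findings: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self._in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_script = False

    def handle_data(self, data):
        if self._in_script and BASE64_RUN.search(data):
            self.findings.append(data[:60])  # keep a short excerpt only

def scan_html(html: str) -> list[str]:
    scanner = ScriptScanner()
    scanner.feed(html)
    return scanner.findings
```

A benign page yields no findings, while an attachment smuggling a base64-encoded blob inside a script tag is flagged for analyst review.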
In some instances, attackers have used LLMs to generate the HTML for phishing pages that silently download malware loaders such as Dunihi (H-Worm) onto users' systems without their knowledge. The same campaign infrastructure has been observed delivering multiple distinct payloads, illustrating how flexible these AI-assisted operations have become.
As AI continues to advance, the threat landscape is expected to evolve, featuring more potent and scalable attacks that require robust countermeasures to mitigate risks. Symantec warns that cybercriminals will increasingly use AI to craft sophisticated phishing attacks and generate malicious code, underlining the urgent need for enhanced cybersecurity measures to defend against these evolving threats.
