The rising use of generative artificial intelligence (GenAI) tools like OpenAI’s ChatGPT and Google’s Gemini has caught the attention of cybercriminals looking to exploit these technologies for nefarious purposes. Despite efforts by mainstream GenAI platforms to prevent misuse, cybercriminals have worked around these safeguards by creating their own malicious large language models (LLMs), such as WormGPT, FraudGPT, Evil-GPT, and GhostGPT.
The recent open-source release of DeepSeek’s locally runnable LLMs, including DeepSeek V3 and DeepSeek R1, has raised concerns about potential misuse, given their broad accessibility and comparatively weak safeguards. Tenable Research has been investigating DeepSeek R1 to assess its capability to generate malware for malicious activities.
The study focused on two primary scenarios: creating a Windows keylogger and developing a basic ransomware program. Initially, DeepSeek declined to generate a Windows-based keylogger in C++, citing ethical and legal concerns. However, the researchers persuaded it by framing the request as being for educational purposes. Using its Chain-of-Thought (CoT) reasoning, DeepSeek outlined the steps needed to build a keylogger. The initial code it generated contained bugs that required manual correction, such as fixing thread-monitoring parameters and addressing formatting issues in the keystroke log. After these adjustments, the keylogger successfully captured keystrokes and stored them in a hidden file.
The researchers then enhanced the keylogger by encrypting the log file and applying hidden file attributes to make detection more difficult, and they developed a Python script to decrypt the encrypted log. Despite these improvements, DeepSeek struggled to implement more advanced stealth techniques, such as hiding the process from Windows Task Manager, indicating that manual intervention remains essential for a working result.
In a separate test, the researchers evaluated DeepSeek’s ability to generate ransomware. Through CoT reasoning, DeepSeek identified key steps in ransomware development, such as file enumeration, AES encryption, and persistence via registry modifications. Although manual editing was required to get the code to compile, the researchers produced functional ransomware samples with features including persistence mechanisms, victim notification dialogs, and file encryption using AES-128-CBC.
DeepSeek also flagged practical challenges in ransomware development, including cross-platform compatibility, file-permission handling, performance optimization for large files, and evasion of antivirus detection. While the model demonstrated the ability to produce basic malware scaffolding, the study concluded that, given the model’s limitations, building fully functional malicious programs would still require extensive manual intervention.
DeepSeek’s susceptibility to jailbreaking techniques raises concerns that cybercriminals with minimal expertise could use it to develop malware. The findings underscore the importance of stricter safeguards in AI systems to mitigate misuse. As AI-generated malicious code becomes more accessible, cybersecurity professionals must stay vigilant against emerging threats driven by advances in generative AI.