LLMjacking operations have made headlines once again by gaining unauthorized access to DeepSeek models shortly after their public release. This trend of illicitly using computing resources for personal gain has drawn comparisons to similar operations such as proxyjacking and cryptojacking. In this case, attackers are using stolen access to large language models (LLMs) from providers such as OpenAI and Anthropic to generate images, bypass national restrictions, and more, all at the expense of unsuspecting victims.
According to recent findings by researchers at Sysdig, there has been a surge in LLMjacking activity targeting models developed by DeepSeek. After the release of DeepSeek-V3 on December 26 and DeepSeek-R1 on January 20, attackers obtained stolen access to these models within days. The development has raised concerns among cybersecurity experts like Crystal Morin, who emphasizes how much the severity and scale of LLMjacking have grown since it was first uncovered in May 2024.
LLMjacking begins with stealing credentials for cloud service accounts or application programming interface (API) keys associated with specific LLM applications. Attackers then run scripts that verify the stolen credentials grant access to the desired models before incorporating them into an “OAI” reverse proxy (ORP). The ORP acts as a bridge between users and the LLMs, concealing both the unauthorized activity and the stolen credentials behind it.
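The verification step is straightforward in principle: most LLM providers expose an authenticated endpoint that rejects invalid keys, so a single request reveals whether a credential is live. The sketch below, written against OpenAI's public model-listing endpoint, shows the general shape of such a check; defenders can use the same technique to audit keys they suspect have leaked. The function name and parameters are illustrative, not from the Sysdig report.

```python
import urllib.request
import urllib.error

def key_is_valid(api_key: str, base_url: str = "https://api.openai.com") -> bool:
    """Return True if the API key is accepted by the provider.

    A GET to the model-listing endpoint returns 200 for a valid key
    and 401 for a rejected one, so no tokens are consumed by the check.
    """
    req = urllib.request.Request(
        f"{base_url}/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # 401/403 means the key was rejected.
        return False
```

Checks like this are cheap and silent, which is why stolen keys can be validated in bulk long before any billing anomaly appears.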
ORPs have evolved to include stronger operational-security features, including password protection, obfuscation mechanisms, and Cloudflare tunnels that conceal the true locations of the virtual private servers hosting them. This sophisticated setup has given rise to underground communities on platforms like 4chan and Discord, where individuals use illicit LLM access for various purposes, including generating NSFW content, evading censorship in certain countries, and more.
The repercussions of LLMjacking extend beyond the perpetrators: account holders ultimately foot the bill for the computing resources consumed. ORP developers strategically spread illicit usage across multiple sets of credentials tied to different accounts to avoid detection. Even so, the consumption can still produce unmistakable billing anomalies, as in one case where an individual’s AWS bill skyrocketed from $2 to $730 within hours due to LLMjacking.
Because that victim monitored cost alerts and promptly shut down the compromised account, further financial losses were avoided. Still, the incident is a cautionary tale of the potential financial impact of LLMjacking, especially at enterprise scale. With these operations growing more sophisticated, cybersecurity experts like Crystal Morin stress the need for heightened vigilance and robust security measures to combat the ongoing threat.
