Cybercriminals have been quick to adapt their tactics for exploiting large language models (LLMs), and a recent uptick in LLMjacking incidents is causing concern. Since Sysdig TRT first uncovered LLMjacking in May 2024, attackers have continuously evolved their methods, now targeting newer models such as DeepSeek and monetizing stolen credentials through proxy services.
The emergence of DeepSeek, an advanced AI model, has attracted the attention of malicious actors. Following the release of DeepSeek-V3 in late December 2024, attackers wasted no time integrating it into OpenAI Reverse Proxy (ORP) instances. Similarly, when DeepSeek-R1 launched on January 20, 2025, threat actors adopted it within days, demonstrating how swiftly they can exploit newly released AI models.
Sysdig TRT’s investigation revealed multiple ORP instances stocked with DeepSeek API keys, indicating widespread exploitation. ORP servers, which act as intermediaries between users and LLM services, have become essential tools for LLMjackers: they provide unauthorized access to AI models while concealing the operators’ identities.
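To make the mechanism concrete, the sketch below shows the reverse-proxy pattern an ORP implements, written in Python with Flask. It is an illustration only, not code from any real ORP project: the upstream endpoint, route name, and key pool are placeholder assumptions.

```python
# Minimal sketch of the reverse-proxy pattern an ORP implements.
# Illustration only: the upstream URL and key pool are placeholders.
import itertools

import requests
from flask import Flask, Response, request

app = Flask(__name__)

UPSTREAM = "https://api.deepseek.com/chat/completions"  # assumed upstream endpoint
# In a real ORP this pool would hold stolen keys; these are fake placeholders.
KEY_POOL = itertools.cycle(["sk-PLACEHOLDER-1", "sk-PLACEHOLDER-2"])

@app.route("/proxy/chat/completions", methods=["POST"])
def proxy():
    # The caller never sees the upstream key: the proxy swaps its own
    # credential into the Authorization header before forwarding.
    headers = {
        "Authorization": f"Bearer {next(KEY_POOL)}",
        "Content-Type": "application/json",
    }
    upstream = requests.post(
        UPSTREAM, headers=headers, json=request.get_json(), timeout=60
    )
    # The response is relayed back verbatim, so the upstream provider only
    # ever sees the proxy's IP address, never the end user's.
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type", "application/json"),
    )

if __name__ == "__main__":
    app.run(port=8080)
```

The two properties that matter to LLMjackers are both visible here: the stolen credential stays server-side, and the victim's LLM provider attributes all traffic to the proxy rather than to the individual users behind it.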
The underground market for stolen AI credentials has thrived, with ORP proxies like vip.jewproxy.tech selling access for $30 per month through a dedicated storefront. The prevalence of these proxies suggests that numerous cybercriminals are leveraging stolen keys to bypass paywalls and push the cost of their AI usage onto the owners of the compromised accounts.
A snapshot of a single ORP instance shows staggering numbers: over 2 billion tokens consumed, amounting to nearly $50,000 in charges in just 4.5 days. This misuse of AI resources leaves legitimate cloud account holders exposed to exorbitant bills for usage they never authorized.
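Those figures imply a blended rate that is easy to sanity-check. The quick calculation below uses only the numbers reported in the snapshot; the per-million-token rate it derives is a blended average across whatever models were abused, not a published price.

```python
# Back-of-the-envelope check on the snapshot figures reported above.
total_tokens = 2_000_000_000   # "over 2 billion total tokens"
total_cost   = 50_000          # "nearly $50,000"
days         = 4.5

cost_per_million = total_cost / (total_tokens / 1_000_000)
daily_burn       = total_cost / days

print(f"~${cost_per_million:.2f} per million tokens")  # ~$25.00
print(f"~${daily_burn:,.0f} per day")                  # ~$11,111 per day
```

At roughly $11,000 per day from one instance, even a short-lived compromise can generate a cloud bill most account holders would struggle to absorb.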
The modus operandi of LLMjackers revolves around the abuse of ORP technology, which lets threat actors route AI requests through reverse proxies, evading detection while operating at scale. Credential theft is the other pillar of LLMjacking: attackers harvest credentials from vulnerable services or exposed software packages, then use them to access AI models illicitly.
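Because so many stolen keys originate from secrets accidentally committed to code or shipped in packages, one practical countermeasure is to scan your own artifacts before attackers do. The sketch below is a minimal example of such a scan; the regex patterns (OpenAI-style `sk-` keys, AWS access key IDs) are illustrative assumptions rather than an exhaustive ruleset, and a dedicated secret scanner would catch far more.

```python
# Minimal sketch of a defender-side scan for leaked API credentials in a
# source tree. The patterns cover a few well-known key formats and are
# illustrative, not exhaustive.
import pathlib
import re

PATTERNS = {
    "OpenAI-style key": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan(root: str) -> None:
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                # Print only a prefix so the scan itself never re-leaks a key.
                print(f"{path}: possible {label}: {match.group()[:12]}...")

if __name__ == "__main__":
    scan(".")
```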
LLMjacking has also given rise to underground communities focused on sharing tools and techniques for exploiting AI resources. Cybercriminals coordinate on platforms like Discord and 4chan and use sites like Rentry.co to distribute access details. Sysdig TRT identified over 20 ORP proxies, some of which use TryCloudflare tunnels to obfuscate their origins, underscoring the sophistication of these operations.
The need for stronger AI security measures is urgent: unauthorized AI access can lead to data leaks, corporate espionage, and other cyber threats. Users of cloud-based LLMs can harden their defenses by enforcing strict access controls, monitoring API usage, and securing credentials against theft.
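As one example of API-usage monitoring, the sketch below queries AWS CloudTrail for recent Bedrock `InvokeModel` calls and tallies them by calling identity; Bedrock was the service abused in the original LLMjacking campaigns Sysdig TRT described. It assumes configured boto3 credentials and that `InvokeModel` events are visible in your account's CloudTrail management events; the 24-hour window is an arbitrary choice.

```python
# Sketch of one monitoring approach for AWS-hosted models: query CloudTrail
# for recent Bedrock InvokeModel calls and surface the calling identities.
# Assumes configured boto3 credentials; the time window is an example value.
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

start = datetime.now(timezone.utc) - timedelta(hours=24)
pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "InvokeModel"}],
    StartTime=start,
)

calls_by_identity: dict[str, int] = {}
for page in pages:
    for event in page["Events"]:
        who = event.get("Username", "unknown")
        calls_by_identity[who] = calls_by_identity.get(who, 0) + 1

# An identity you don't recognize, or an unusual call volume, is a signal
# that a credential may be in an LLMjacker's hands.
for who, count in sorted(calls_by_identity.items(), key=lambda kv: -kv[1]):
    print(f"{who}: {count} InvokeModel calls in the last 24h")
```

Pairing a check like this with billing alerts shortens the window between credential compromise and discovery, which is where most of the financial damage accrues.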
As LLMjacking operations grow more sophisticated, organizations must prioritize the cybersecurity measures that protect their AI resources from exploitation. Staying ahead of these threats is the surest way to contain the financial and security risks of unauthorized AI usage.