The growing black market for access to large language models (LLMs) has become a cause for concern, as attackers increasingly use stolen cloud credentials to abuse AI services such as Amazon Bedrock, a technique known as LLM-Jacking.
Investigations by security provider Sysdig indicate that threat actors are not only querying LLMs that account holders have already enabled on such platforms, but are also attempting to activate new ones, which could rapidly escalate costs for the victims.
“LLM-Jacking is on the rise,” the security researchers warn in their report. In July 2024, they recorded a tenfold increase in LLM queries and a doubling of the number of unique IP addresses involved in these attacks. “As large language models continue to evolve, the costs for victims using premium models like Claude 3 Opus could nearly triple to over $100,000 per day.”
Sysdig has found evidence that attackers engaged in LLM-Jacking are, in some cases, based in Russia, where access to LLM chatbots and services from Western companies is heavily restricted due to sanctions.
“The primary language used in the prompts is English (80 percent), with Korean being the second most common (10 percent), followed by Russian, Romanian, German, Spanish, and Japanese,” the research report states.
Amazon Bedrock is an AWS service that enables organizations to easily deploy and use LLMs from multiple AI companies, supplement them with their own datasets, and build agents and applications around them. The service supports a long list of API actions through which models can be managed and interacted with programmatically.
The most common API actions abused by attackers this year via compromised credentials include InvokeModel, InvokeModelWithResponseStream, Converse, and ConverseStream. More recently, attackers have also been observed using PutFoundationModelEntitlement and PutUseCaseForModelAccess, which serve to activate models, alongside ListFoundationModels and GetFoundationModelAvailability, which let attackers determine which models an account already has access to.
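To illustrate how little effort this reconnaissance takes, here is a hedged sketch (not Sysdig's observed tooling) of enumerating a victim account's models with the standard boto3 SDK. The region is a placeholder; note that GetFoundationModelAvailability is a console-internal action and is not exposed in the public SDK.

```python
# Minimal sketch: enumerating Bedrock foundation models with boto3.
# The region is a placeholder; ListFoundationModels is the public
# API action named in the report.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.list_foundation_models()
for model in response["modelSummaries"]:
    print(model["modelId"], model.get("modelLifecycle", {}).get("status"))
```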
This means that even organizations that have deployed Bedrock but have not activated certain models are not safe. The cost differences between models can be significant: the researchers estimated potential costs of over $46,000 per day for abuse of a Claude 2.x model, while costs for models like Claude 3 Opus could be two to three times higher.
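To see how that arithmetic works, consider a back-of-the-envelope sketch. The per-1,000-token prices and the sustained request volume below are illustrative assumptions, not AWS's official price list, but they reproduce figures in the same range as the report's estimates.

```python
# Back-of-the-envelope daily cost estimate for sustained model abuse.
# Prices and request volumes are illustrative assumptions only.
def daily_cost(requests_per_minute, in_tokens, out_tokens,
               usd_per_1k_in, usd_per_1k_out):
    per_request = (in_tokens / 1000) * usd_per_1k_in \
                + (out_tokens / 1000) * usd_per_1k_out
    return per_request * requests_per_minute * 60 * 24

# Assumed abuse rate: 1,000 requests/min, 1k tokens in, 1k tokens out.
print(f"${daily_cost(1000, 1000, 1000, 0.008, 0.024):,.0f}/day")  # Claude 2.x-class pricing
print(f"${daily_cost(1000, 1000, 1000, 0.015, 0.075):,.0f}/day")  # Claude 3 Opus-class pricing
```

With these assumptions the first line prints roughly $46,000 per day and the second about $130,000, consistent with the two-to-three-fold gap the researchers describe.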
The researchers also found that attackers are using Claude 3 itself to generate and refine the script code they use to query the model. The script is designed to continuously interact with the model, generate responses, search them for specific content, and store the results in text files.
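Sysdig published only a description of the script's behavior, not the script itself; a hedged reconstruction of that loop could look like the following, where the model ID, prompt, and search keyword are all placeholders.

```python
# Hedged reconstruction of the behavior described above, not the
# actual attacker script: continuously invoke a model, scan each
# response for a keyword, and append hits to a text file.
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # placeholder model

while True:
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user",
                      "content": [{"type": "text", "text": "..."}]}],  # prompt elided
    })
    result = runtime.invoke_model(modelId=MODEL_ID, body=body)
    text = json.loads(result["body"].read())["content"][0]["text"]
    if "KEYWORD" in text:  # placeholder search term
        with open("results.txt", "a") as f:
            f.write(text + "\n")
```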
“Model deactivation in Bedrock, and the activation step it requires, should not be considered a security measure,” the researchers emphasized. “Attackers can and will activate models on your behalf to achieve their goals.”
One example is the Converse API, which was announced in May 2024 and gives users a streamlined way to interact with Amazon Bedrock models. According to Sysdig, attackers began abusing the API within 30 days of its release. Unlike InvokeModel actions, Converse API actions do not automatically appear in CloudTrail logs.
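For reference, a minimal Converse call through boto3 looks like this; the model ID and prompt are placeholders.

```python
# Minimal Converse API call with boto3. Converse offers one uniform
# request shape across Bedrock models; identifiers are placeholders.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = runtime.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello"}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```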
Even when logging is enabled, savvy attackers try to disable it by calling DeleteModelInvocationLoggingConfiguration, which turns off invocation logging to CloudWatch and S3. In other cases, they check the logging status first and avoid using stolen credentials where logging is active, in order to conceal their activities.
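On the defensive side, a simple check, assuming the standard boto3 Bedrock client, can flag a region where invocation logging has been switched off:

```python
# Minimal defensive check: warn if Bedrock model invocation logging
# is absent (e.g., after a DeleteModelInvocationLoggingConfiguration
# call). The region is a placeholder.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")
config = bedrock.get_model_invocation_logging_configuration()

if not config.get("loggingConfig"):
    print("WARNING: model invocation logging is disabled in this region")
```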
Attackers often do not call Amazon Bedrock models directly, but instead work through third-party services and tools. One example is SillyTavern, a front-end application for interacting with LLMs, whose users must supply their own credentials for an LLM service of their choice or for a proxy service.
“As this can be costly, an entire criminal ecosystem has developed around access to LLMs,” the researchers noted. “Credentials are obtained in many ways, including through payment, free trials, and theft. Since this access is a valuable commodity, reverse proxy servers are used to securely hold and control the credentials.”
Companies should take steps to ensure that their AWS credentials and tokens do not leak into code repositories, configuration files, and other exposed locations. They should also apply the principle of least privilege, restricting credentials to the task for which they were created.
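As one hedged illustration of that principle, an inline IAM policy can scope a workload's credentials to invoking a single approved model; the user name and model ARN below are placeholders. Note that the Converse API is authorized through the bedrock:InvokeModel permission, so no separate action is needed for it.

```python
# Hedged least-privilege sketch: an inline IAM policy that only allows
# invoking one pre-approved foundation model. User name and model ARN
# are placeholders.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "bedrock:InvokeModel",
            "bedrock:InvokeModelWithResponseStream",
        ],
        "Resource": "arn:aws:bedrock:us-east-1::foundation-model/"
                    "anthropic.claude-3-sonnet-20240229-v1:0",
    }],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="bedrock-app-user",            # placeholder principal
    PolicyName="bedrock-least-privilege",
    PolicyDocument=json.dumps(policy),
)
```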
“Continuously evaluate your cloud against best-practice posture controls, such as the AWS Foundational Security Best Practices standard,” the Sysdig researchers recommend. “Monitor your cloud for potentially compromised credentials, unusual activity, unexpected LLM usage, and indicators of active AI threats.”
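One way to act on that advice, sketched here under the assumption that CloudTrail management-event logging is still intact, is to watch for the logging-deletion call described above:

```python
# Hedged monitoring sketch: look up recent CloudTrail management
# events for the logging-deletion action named in the article.
# Region and time window are placeholders; real deployments would
# alert rather than print.
from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

events = cloudtrail.lookup_events(
    LookupAttributes=[{
        "AttributeKey": "EventName",
        "AttributeValue": "DeleteModelInvocationLoggingConfiguration",
    }],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
)
for event in events.get("Events", []):
    print("Suspicious event:", event["EventName"], event["EventTime"])
```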