Cybersecurity researchers at third-party risk management firm UpGuard have raised concerns about exposed Ollama APIs, which are used to access running AI models. These exposed APIs not only create security risks for model owners but also offer a window into the adoption rates and geographic distribution of specific AI models, such as DeepSeek.
Ollama is an AI model framework that simplifies working with models by offering a user-friendly interface for selecting and downloading them. However, according to UpGuard’s research shared with Hackread.com, its API can be exposed to the public internet, potentially putting data at risk. Unauthenticated users can also invoke the functions to push, pull, and delete models, which can run up costs for cloud computing resource owners, and known vulnerabilities in Ollama itself could be leveraged by malicious actors.
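To illustrate why exposure matters, here is a minimal sketch of what an unauthenticated query against an exposed Ollama instance looks like. Ollama listens on TCP port 11434 by default, and its `/api/tags` endpoint returns the list of installed models without any authentication; the `parse_tags` helper and the example host are illustrative, not part of UpGuard's tooling.

```python
import json
from urllib.request import urlopen

OLLAMA_PORT = 11434  # Ollama's default listen port


def parse_tags(payload: dict) -> list[str]:
    """Extract model names from an /api/tags response body."""
    return [m["name"] for m in payload.get("models", [])]


def list_models(host: str, timeout: float = 5.0) -> list[str]:
    """Query an Ollama instance for its installed models.

    Only run this against hosts you own or are authorized to test.
    """
    with urlopen(f"http://{host}:{OLLAMA_PORT}/api/tags", timeout=timeout) as resp:
        return parse_tags(json.load(resp))
```

A plain HTTP GET is all it takes: anyone who can reach the port can enumerate the models, and the same unauthenticated surface includes `/api/pull` and `/api/delete`.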
UpGuard’s research has already detected the exposure being exploited, with targeted IPs showing signs of tampering. The researchers warn that hobbyists, small businesses, universities, and home internet connections running Ollama APIs are at risk of compromise, as these systems could easily be incorporated into botnets for future attacks.
Further analysis by UpGuard focused on the distribution of DeepSeek models across different parameter sizes running on exposed Ollama APIs. The research found that models with 14 billion and 7 billion parameters were the most commonly observed among exposed DeepSeek models, indicating that users favor mid-range models.
The number of IP addresses exposing Ollama APIs has increased by over 70% in the past three months, to approximately 7,000 exposed IPs. Of these, 700 are running some version of DeepSeek, with a significant portion using models from the deepseek-v2 and deepseek-r1 families.
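Because Ollama model tags encode both the family and the parameter size (for example, `deepseek-r1:14b`), the kind of breakdown UpGuard describes can be reproduced from `/api/tags` output alone. The tallying function below is a hypothetical sketch of that analysis, not UpGuard's actual methodology.

```python
import re
from collections import Counter


def tally_deepseek(model_names: list[str]) -> tuple[Counter, Counter]:
    """Count DeepSeek families and parameter sizes from Ollama model tags."""
    families: Counter = Counter()
    sizes: Counter = Counter()
    for name in model_names:
        base, _, tag = name.partition(":")  # e.g. "deepseek-r1", "14b"
        if not base.startswith("deepseek"):
            continue  # ignore non-DeepSeek models
        families[base] += 1
        size = re.search(r"(\d+(?:\.\d+)?)b", tag)
        if size:
            sizes[size.group(1) + "b"] += 1
    return families, sizes
```

Feeding in the model lists gathered from exposed instances yields the family and parameter-size distributions discussed above.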
Geographically, the highest concentration of IPs running DeepSeek models was found in China, followed by the U.S. and Germany. The analysis of DeepSeek models within the U.S. by Internet Service Provider revealed a diverse range of providers hosting these models, from major entities like Google LLC to smaller ISPs and universities.
Notably, DeepSeek has faced restrictions from entities like the U.S. Navy, the state of Texas, NASA, and Italy due to concerns about potential data leakage to the Chinese government. While these restrictions may not apply to open-source models, UpGuard researchers highlighted the need for users to scrutinize the code and be aware of potential risks within the models themselves to prevent AI data leakage.
In conclusion, auditing your attack surface for exposed Ollama APIs and staying informed about critical models and AI products is essential for mitigating AI data leakage and managing third-party risk. The rapid growth of exposed Ollama APIs and the widespread adoption of DeepSeek underscore the importance of finding and closing these exposures before attackers do.
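A first pass at that audit can be as simple as checking whether Ollama's default port answers on the hosts you are responsible for. The sketch below assumes you are scanning infrastructure you own; the `audit` helper is illustrative.

```python
import socket

OLLAMA_PORT = 11434  # Ollama's default listen port


def port_open(host: str, port: int = OLLAMA_PORT, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0


def audit(hosts: list[str]) -> dict[str, bool]:
    """Map each of your own hosts to whether the Ollama port is reachable."""
    return {h: port_open(h) for h in hosts}
```

Any host flagged here should either have the API bound to localhost only, or be placed behind a reverse proxy that enforces authentication.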
