New research from Orca Security has revealed significant security shortcomings in how organizations deploy AI models and tools. Despite the growing use of AI across industries, many organizations are failing to implement basic security measures, leaving their models vulnerable to attack.
According to the “2024 State of AI Security Report” published by Orca Security, a significant number of organizations are not adequately securing their AI tools. One major issue highlighted in the report is the failure to disable risky default settings, which can allow attackers to gain root access, exploit vulnerable packages, or unknowingly expose sensitive code.
The report echoes concerns raised by other security firms, such as Veracode, which have warned about the risks that accompany the rapid adoption of AI technology. Failing to prioritize security in the development and deployment of AI models poses serious risks to organizations and their data.
One alarming finding is that 56% of organizations deploy their own AI models for collaboration and automation, yet the software packages supporting these deployments often contain at least one known vulnerability (CVE). While most of these vulnerabilities are currently rated low to medium severity, the potential for exploitation remains a significant concern.
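Teams that want to gauge their own exposure can check deployed Python dependencies against a public vulnerability database. The following minimal sketch queries the OSV.dev API for every installed package; it assumes the `requests` library is available, and it is an illustration rather than a substitute for a full software composition analysis tool.

```python
# Minimal sketch: query the public OSV.dev database for known
# vulnerabilities affecting each installed Python package.
import importlib.metadata

import requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

for dist in importlib.metadata.distributions():
    name = dist.metadata["Name"]
    version = dist.version
    resp = requests.post(
        OSV_QUERY_URL,
        json={"package": {"name": name, "ecosystem": "PyPI"}, "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    vulns = resp.json().get("vulns", [])
    if vulns:
        ids = ", ".join(v["id"] for v in vulns)
        print(f"{name}=={version}: {ids}")
```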
Insecure configurations and controls were identified as a major security challenge in the deployment of AI tools. For instance, the report found that 27% of organizations had not configured Azure OpenAI, a commonly used AI service, with private endpoints, potentially exposing data to attackers. Default settings for platforms like Amazon SageMaker were likewise found to favor development speed over security, leaving organizations open to unauthorized access.
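As a concrete example of overriding those defaults, the hedged sketch below uses boto3 to create a SageMaker notebook instance with root access and direct internet access disabled; the instance name, role ARN, subnet, and security group IDs are placeholders, not values from the report.

```python
# Hedged sketch: create a SageMaker notebook instance with the risky
# defaults overridden. All identifiers below are placeholders.
import boto3

sagemaker = boto3.client("sagemaker")

sagemaker.create_notebook_instance(
    NotebookInstanceName="example-notebook",  # hypothetical name
    InstanceType="ml.t3.medium",
    RoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",  # placeholder
    RootAccess="Disabled",            # the default is 'Enabled'
    DirectInternetAccess="Disabled",  # route traffic through the VPC instead
    SubnetId="subnet-0123456789abcdef0",        # placeholder VPC subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
)
```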
Encryption was also highlighted as a critical gap, with many organizations failing to encrypt sensitive data using self-managed (customer-managed) keys. This leaves sensitive data at risk of exposure and manipulation by malicious actors, increasing the likelihood of data breaches and other security incidents.
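On AWS, for example, a team can enforce default encryption with a customer-managed KMS key on a bucket holding training data. The sketch below is illustrative only; the bucket name and key ARN are placeholders.

```python
# Hedged sketch: enforce default server-side encryption on an S3 bucket
# using a customer-managed KMS key. Bucket and key ARN are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-ml-training-data",  # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/example",
                },
                "BucketKeyEnabled": True,  # cuts KMS request volume
            }
        ]
    },
)
```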
Furthermore, the report pointed out security risks associated with popular AI platforms like OpenAI and Hugging Face, where exposed access keys were identified as a vulnerability. Researchers have demonstrated how these vulnerabilities can be exploited to gain access to sensitive data, underscoring the importance of implementing robust security measures in AI deployments.
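The simplest defense against key exposure is to keep credentials out of source code entirely. The minimal sketch below reads keys from environment variables and fails fast when they are missing; the variable names follow common convention but are assumptions, not platform requirements.

```python
# Minimal sketch: load API credentials from the environment rather than
# hardcoding them; raise immediately if a required variable is unset.
import os

def require_env(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; refusing to start")
    return value

openai_api_key = require_env("OPENAI_API_KEY")  # conventional variable name
hf_token = require_env("HF_TOKEN")              # conventional variable name
```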
Orca Security’s CEO, Gil Geron, emphasized the need for organizations to prioritize security in AI adoption and implement clear policies and boundaries to mitigate risks. He urged security practitioners to take proactive measures, such as checking default settings, limiting permissions, and practicing good network hygiene to protect AI projects and tools from potential threats.
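One way to act on that advice is to audit existing resources for defaults that were never changed. The hedged sketch below lists SageMaker notebook instances and flags any that still allow root access or direct internet access; it assumes AWS credentials with read permissions on SageMaker.

```python
# Hedged sketch: flag SageMaker notebook instances still running with
# risky default settings. Assumes suitably scoped AWS credentials.
import boto3

sagemaker = boto3.client("sagemaker")

paginator = sagemaker.get_paginator("list_notebook_instances")
for page in paginator.paginate():
    for summary in page["NotebookInstances"]:
        name = summary["NotebookInstanceName"]
        detail = sagemaker.describe_notebook_instance(NotebookInstanceName=name)
        if (detail.get("RootAccess") == "Enabled"
                or detail.get("DirectInternetAccess") == "Enabled"):
            print(f"review: {name}")
```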
The report serves as a wake-up call for organizations to address the security risks of AI deployment and take proactive steps to safeguard their models and data. As the pace of AI development accelerates, security measures must keep up to protect these increasingly critical assets.