Salt Security has unveiled its latest report, “1H 2026 State of AI and API Security: Navigating the Agentic Era,” which highlights a troubling gap between the rapid deployment of AI agents and the security measures needed to manage them. As organizations increasingly incorporate autonomous AI agents into their operations, the report reveals that a staggering 92% of them lack the advanced security maturity needed to safeguard these new environments.
As AI technology advances, its reliance on Application Programming Interfaces (APIs) has only deepened. APIs serve as the execution layer for AI systems, facilitating every action taken by agents, large language models (LLMs), and Model Context Protocol (MCP) servers. That reliance has led to an explosion in API usage: the report indicates that two-thirds of organizations have seen API deployments grow by more than 50% over the past year, underscoring the critical role APIs play in the functioning of AI systems.
However, as the adoption of AI-driven automation accelerates, security measures are failing to keep pace, leading to what Salt Security refers to as the “Agentic Security Gap.” In the complex ecosystems of modern AI environments, security must now focus on visibility and control across the entire agentic AI stack, rather than isolating efforts to individual APIs. Roey Eliyahu, Co-Founder and CEO of Salt Security, emphasizes that securing AI agents necessitates a comprehensive approach, stating, “You cannot secure AI agents without securing every layer they touch, including the APIs they call, the MCP servers they route through, and the data they access. Risk in the agentic era doesn’t sit in one place. It lives in how all of those pieces interact in real time.”
The findings are based on a survey of 327 security leaders and illustrate a significant gap between the speed of AI adoption and the development of robust security measures. Alarmingly, nearly half (47%) of organizations admitted to delaying production releases over API security concerns, and roughly one-third (32%) reported at least one API security incident in the previous year. Despite the prevalence of these risks, only 8% of organizations have achieved an advanced state of API security maturity, leaving the vast majority inadequately prepared for the challenges ahead.
In light of these findings, boards and executive teams are scrutinizing AI security risks more closely. The report indicates that 79% of these leaders have heightened their focus on such risks, yet only 18% express extreme confidence in their ability to detect attacks that employ generative AI. This disparity points to a pressing issue: legacy security tools are inadequate against threats in contemporary agentic environments.
A critical factor contributing to this confidence lag is a glaring lack of visibility, a persistent vulnerability for many organizations. The survey reveals that fewer than one in four (24%) organizations maintain a fully automated API inventory, while most still depend on partial or manual methods for tracking. Furthermore, nearly 90% of organizations are either currently utilizing or planning to adopt generative AI in their API development processes. This trend introduces a new spectrum of evolving security risks that must be addressed throughout the software lifecycle.
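To make the inventory point concrete, here is a minimal sketch of what automating API discovery might involve, assuming gateway-style access logs and a declared OpenAPI endpoint list (all log lines, paths, and endpoint names below are hypothetical, not from the report): it normalizes observed traffic into endpoint templates and flags “shadow” endpoints that appear in traffic but are missing from the documented spec.

```python
import re
from collections import Counter

# Hypothetical sample of API gateway access-log lines; a real inventory
# would stream these continuously from the gateway.
LOG_LINES = [
    '10.0.0.5 "GET /api/v1/users/123" 200',
    '10.0.0.6 "POST /api/v1/orders" 201',
    '10.0.0.7 "GET /api/v1/users/456" 200',
    '10.0.0.8 "GET /api/internal/debug" 200',  # undocumented endpoint
]

# Endpoints declared in the (hypothetical) OpenAPI spec.
DOCUMENTED = {"GET /api/v1/users/{id}", "POST /api/v1/orders"}

def normalize(method: str, path: str) -> str:
    """Collapse numeric path segments into an {id} placeholder."""
    template = re.sub(r"/\d+", "/{id}", path)
    return f"{method} {template}"

def build_inventory(lines):
    """Return observed endpoints with call counts, plus shadow APIs."""
    observed = Counter()
    for line in lines:
        match = re.search(r'"(\w+) (\S+)"', line)
        if match:
            observed[normalize(*match.groups())] += 1
    shadow = set(observed) - DOCUMENTED
    return observed, shadow

inventory, shadow_apis = build_inventory(LOG_LINES)
print(shadow_apis)  # endpoints seen in traffic but absent from the spec
```

Even this toy version shows why manual tracking falls behind: the undocumented debug endpoint surfaces only because traffic itself is treated as the source of truth.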
As the threat landscape evolves, attackers are increasingly exploiting trusted systems, which, according to Salt Labs, makes attack attempts far more insidious. The data shows that 99% of analyzed attack attempts originate from authenticated sources, including rogue agents operating with legitimate credentials but without adequate human oversight, rate limiting, or behavioral guardrails. More concerning still, approximately two-thirds (65%) of attacks target security misconfigurations (OWASP API8), a risk that becomes even more dangerous when over-permissioned APIs are linked to AI agents.
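A sketch of the kind of per-identity guardrail the report describes as missing: a simple token-bucket rate limiter keyed by agent credential (the agent name and limits are illustrative assumptions, not Salt Security's design), which cuts off an authenticated caller once it exceeds its call budget — authentication alone does not constrain behavior.

```python
import time

class TokenBucket:
    """Per-agent token bucket: a behavioral guardrail that caps how fast
    an authenticated identity may call an API, independent of auth."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per authenticated agent credential (name is hypothetical).
buckets = {"agent-reporting": TokenBucket(capacity=5, refill_per_sec=1.0)}

def gate_call(agent_id: str) -> bool:
    """Deny the call once an agent exceeds its budget, even though its
    credentials are valid."""
    bucket = buckets.get(agent_id)
    return bucket is not None and bucket.allow()

# A rapid burst of 8 calls from a valid credential: the first 5 pass,
# then the limiter cuts the agent off until tokens refill.
results = [gate_call("agent-reporting") for _ in range(8)]
print(results)
```

The point of keying the bucket by credential rather than by IP is precisely the report's finding: when 99% of malicious traffic is authenticated, the identity itself must carry a behavioral budget.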
The report posits that API security is no longer merely a subset of application or cloud security; rather, it has emerged as a foundational discipline in its own right. With APIs now accounting for a significant portion of web traffic and serving as the backbone for all AI agent activities, they represent a critical attack surface that traditional security measures were not designed to protect effectively.
In response to these challenges, Salt Security is promoting an innovative security model termed the “Agentic Security Graph.” This framework aims to map the interconnections between LLMs, MCP servers, and APIs. Together, these elements compose the agentic stack, providing the essential context to comprehend not just what AI systems generate but also how they interact across enterprise environments.
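As a rough sketch of the idea behind such a graph (the components and edges below are hypothetical, not Salt Security's implementation), one can model agents, LLMs, MCP servers, and APIs as nodes in a directed graph and ask which APIs a given agent can reach transitively — its blast radius through the stack:

```python
from collections import deque

# Illustrative edges for a hypothetical agentic stack: each entry reads
# "source component invokes or routes to these targets".
EDGES = {
    "agent:support-bot": ["llm:model-a", "mcp:crm-server"],
    "mcp:crm-server": ["api:/customers", "api:/tickets"],
    "llm:model-a": ["mcp:search-server"],
    "mcp:search-server": ["api:/search"],
}

def reachable_apis(start: str) -> set:
    """Breadth-first walk of the graph: every API a component can touch,
    directly or through the LLMs and MCP servers it routes through."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in EDGES.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return {n for n in seen if n.startswith("api:")}

print(sorted(reachable_apis("agent:support-bot")))
```

The traversal makes the article's central claim tangible: the support bot never calls `/search` directly, yet it can reach that API through the LLM and an MCP server, so securing any single layer in isolation misses the path.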
Roey Eliyahu elaborates on the company’s vision, stating, “Salt Security was founded on the belief that APIs are the most critical and most overlooked attack surface in the enterprise. As AI agents have emerged, it has become clear that APIs are just one pillar in a much larger, deeply connected system.” The urgency of addressing this issue is underscored in the report, which highlights that the problems associated with unsecured APIs are not hypothetical; they are present and demand immediate attention, particularly as many organizations remain unprepared to tackle them effectively.
As organizations navigate this complex landscape, the emphasis remains on strengthening security protocols around APIs and AI so that risks can be mitigated in a rapidly evolving digital world. As enterprise reliance on AI intensifies, robust security frameworks and proactive safeguards against potential vulnerabilities become more critical than ever.

