The Growing Use of AI Agents for Security Tasks: A Double-Edged Sword
In a recent study by cybersecurity vendor Semperis, a clear trend emerged in how organizations are adopting artificial intelligence (AI) agents for security work. An overwhelming 93% of global organizations currently use, or intend to deploy, AI agents for sensitive security functions such as password resets and VPN access, despite acknowledging the potential for significant breaches and data leaks.
To gather insights for its report, titled State of Identity Security in the AI Era, Semperis surveyed 1,100 organizations across eight countries: the United States, the United Kingdom, France, Germany, Spain, Italy, Singapore, and Australia. The findings revealed a substantial reliance on AI for sensitive operations, with 92% of respondents indicating that AI is installed on at least some local systems capable of accessing SSH and encryption keys. This reality raises alarming questions about the vulnerability of those systems to external threats.
Furthermore, there is a growing concern within the industry regarding the implications of AI on identity security; a striking 74% of those surveyed believe that the introduction of AI will lead to an increase in attacks targeting identity infrastructure. This sentiment echoes broader conversations within cybersecurity circles about the evolving threats posed by cybercriminals and the methods they employ to exploit vulnerabilities.
In contrast to this enthusiasm for automation and AI, respondent confidence in managing the associated risks remains low. Only 32% reported being “very confident” in their ability to regain control following an exposure of AI-driven credentials. Grace Cassy, a partner at Ten Eleven Ventures, commented on this juxtaposition. “What is striking about the study is not just how quickly AI is being integrated into identity systems but how unprepared many organizations are to recover when things go wrong,” she noted.
Cassy emphasized the importance of integrating operational safeguards, observability, and recovery protocols when adopting AI at the identity layer. “It is a new dimension of an old question, really: are you resilient enough to respond in the event of critical disruption?” she asked. This indicates a critical gap in preparedness that organizations must address as they delve deeper into the adoption of AI technologies.
An Overabundance of Non-Human Identities
The report further elaborates on the complications arising from the escalating number of non-human identities (NHIs), including AI agents, which have made identity governance a challenging task for security teams. The proliferation of these digital agents leads to a concerning number of abandoned “zombie” agents and shadow NHIs, which are susceptible to being commandeered by threat actors. Many of these identities are granted excessive permissions, often equivalent to those of actual human users, raising further security concerns.
Alarmingly, only 65% of organizations actively register, authenticate, and authorize their AI identities within a formal governance framework, and 6% of respondents reported not tracking these entities at all. Of the organizations that do maintain oversight, more than half (57%) use the same systems to manage both AI identities and those of human users, an approach that may inadvertently exacerbate security vulnerabilities.
A Commitment to Best Practices in AI Identity Governance
On a positive note, the study indicates that AI identity governance has emerged as a top priority for 83% of global organizations within the next year. However, the specifics of the measures they plan to implement remain uncertain.
To help organizations navigate this complex environment, Semperis has released a series of recommendations:
- Separate Identity Roles: Treat AI agents as distinct NHIs instead of equating them with human identities.
- Implement Access Controls: Enforce a least-privilege model and apply just-in-time access to AI agents analogous to human identities.
- Establish Trust Boundaries: Where appropriate, segregate the trust frameworks for AI agents and human identities to minimize risk.
- Utilize Monitoring Tools: Deploy user and entity behavior analytics (UEBA) to identify suspicious agent activity or “zombie” behavior.
- Prepare for Breaches: Ensure that organizations possess robust mechanisms for swiftly recovering identity systems to a secure state in the event of a breach.
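To make the first three recommendations concrete, the sketch below shows what a just-in-time, least-privilege access broker for AI agents might look like. It is a minimal illustration, not Semperis's product or any real API: the `JITAccessBroker` class, its scope names, and the TTL-based grants are all hypothetical, but they capture the ideas of registering agents as distinct non-human identities, enforcing least privilege, and expiring access so "zombie" grants cannot linger.

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """An AI agent registered as a distinct non-human identity (NHI),
    kept separate from human accounts."""
    agent_id: str
    allowed_scopes: frozenset  # least privilege: the narrowest useful set

@dataclass
class Grant:
    agent_id: str
    scope: str
    expires_at: float  # just-in-time access always carries an expiry

class JITAccessBroker:
    """Illustrative broker: registers agent identities, issues short-lived
    scoped grants, and sweeps expired ('zombie') grants."""

    def __init__(self):
        self._registry: dict[str, AgentIdentity] = {}
        self._grants: list[Grant] = []

    def register(self, agent: AgentIdentity) -> None:
        # Separate identity roles: agents live in their own registry.
        self._registry[agent.agent_id] = agent

    def request_access(self, agent_id: str, scope: str,
                       ttl_s: float = 300.0) -> Grant:
        agent = self._registry.get(agent_id)
        if agent is None:
            raise PermissionError(f"unregistered agent: {agent_id}")
        if scope not in agent.allowed_scopes:
            # Least privilege: deny anything outside the agent's policy.
            raise PermissionError(f"scope {scope!r} exceeds policy")
        grant = Grant(agent_id, scope, time.time() + ttl_s)
        self._grants.append(grant)
        return grant

    def is_active(self, grant: Grant) -> bool:
        return time.time() < grant.expires_at

    def sweep(self) -> int:
        # Remove expired grants so abandoned agents retain no access.
        now = time.time()
        before = len(self._grants)
        self._grants = [g for g in self._grants if g.expires_at > now]
        return before - len(self._grants)
```

In this model, a help-desk agent allowed only `"password_reset"` would be refused a `"vpn_admin"` grant outright, and even its legitimate grants disappear once the TTL lapses, which is the behavior the report's recommendations are driving at.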
As organizations progress into the AI era, the intersection of technology and security will require them to critically reassess their approach to identity management. The stakes are high, and the potential consequences of inaction are far-reaching. Moving forward, the focus must not only be on integrating AI for operational efficiencies but also on fortifying defenses and building resilience against a landscape rife with emerging threats.
