
Governance Gaps Surface with 76% Rise in NHIs Driven by AI Agents


The SANS Institute has issued urgent warnings about the growing challenges surrounding the incorporation of artificial intelligence (AI) into enterprise workflows. According to the organization, this rapid integration poses significant security threats that could far exceed current protective measures. This cautionary note emerged from the recent publication of the 2026 SANS State of Identity Threats & Defenses Survey, which aggregates insights from interviews with over 500 security professionals globally.

One of the most striking revelations from the survey is that 76% of organizations are witnessing an expansion in non-human identities (NHIs). These NHIs encompass various entities such as service accounts, API keys, automation bots, and workload identities. The influx of NHIs is being significantly influenced by the rise of agentic AI, with 74% of organizations already deploying AI agents or automated processes that necessitate credentials for functionality. The sheer volume of these NHIs within enterprises has reportedly doubled or even tripled, raising alarms about the potential vulnerabilities they introduce.

Agentic AI, identified as a particularly pressing concern, represents a new category of security risk that many organizations are ill-equipped to manage. These AI agents, which require credentials and access permissions to function autonomously, typically receive privileged access to directly interact with vital infrastructures and sensitive data. Unlike traditional NHIs that operate based on fixed protocols, agentic AI can interpret commands in a more fluid manner, often resulting in unpredictable actions similar to those of an insider with excessive privileges. This unpredictability is compounded by the risk of "hallucination," where the AI may produce outputs that are erroneous or misleading.

Insights from Forrester also amplify the sense of urgency in addressing these risks. The firm has warned that the implementation of agentic AI could very likely lead to a publicly disclosed data breach by the end of 2026 unless organizations adopt a "minimum viable security" framework to mitigate the inherent dangers.

Inadequate AI Governance

The SANS Institute’s study further highlights significant shortcomings in how organizations manage the security implications of AI integration. A striking 92% of surveyed organizations do not rotate machine credentials on a scheduled 90-day cycle, primarily out of concern that doing so could disrupt service accounts. Alarmingly, 59% of organizations update fewer than half of their NHI credentials each quarter, and 15% have no visibility into their credential rotation practices at all. Compounding the problem, 5% of respondents admitted they were uncertain whether their organizations even employ agentic AI technology.
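The 90-day rotation window the survey uses as its benchmark is easy to audit programmatically. The following is a minimal sketch (the credential names and inventory format are hypothetical, not from the survey) of flagging NHI credentials that have exceeded that window:

```python
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=90)  # the 90-day rotation window cited in the survey

def stale_credentials(creds, now):
    """Return names of credentials whose last rotation exceeds MAX_AGE.

    `creds` maps credential name -> datetime of last rotation.
    """
    return sorted(name for name, rotated in creds.items()
                  if now - rotated > MAX_AGE)

# Hypothetical service-account inventory for illustration
inventory = {
    "svc-ci-deploy": datetime(2025, 1, 10),   # rotated ~11 months ago
    "svc-report-bot": datetime(2025, 11, 1),  # rotated ~1 month ago
}
print(stale_credentials(inventory, now=datetime(2025, 12, 1)))
# → ['svc-ci-deploy']
```

In practice such a report would be fed by a secrets-management system's metadata rather than a hand-maintained dictionary, but the age check itself is this simple.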

Another substantial hurdle surfaced in the report: many organizations depend on manual access reviews, ticket-based provisioning, and infrequent credential rotations. This reliance proves ineffective, particularly in environments characterized by a large number of NHIs operating at machine speed across various platforms, including DevOps, cloud services, and Software as a Service (SaaS) systems.

Richard Greene, a certified instructor at the SANS Institute, emphasized the need for organizations to establish robust governance frameworks to rein in the burgeoning power of AI in decision-making processes. He remarked, "We’ve already witnessed chaotic consequences when non-human identities proliferate without adequate oversight, and the rise of agentic AI is progressing even more rapidly." While he acknowledged encouraging signs—such as nearly 40% of organizations now employing human-in-the-loop approvals for AI agent actions—he cautioned that the challenge lies in maintaining these controls as AI systems shift from experimental stages to becoming integral components of daily operations.
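The human-in-the-loop pattern Greene refers to can be sketched as a simple gate: routine agent actions execute directly, while privileged ones are queued until a person approves them. This is an illustrative sketch only (the action names and queue are hypothetical), not a description of any specific product:

```python
# Actions considered privileged for this hypothetical example
PRIVILEGED = {"delete", "rotate_credential", "modify_policy"}

pending = []  # queue of (agent, action, target) awaiting human review

def submit(agent, action, target):
    """Gate an AI-agent action: queue privileged ones, execute the rest."""
    if action in PRIVILEGED:
        pending.append((agent, action, target))
        return "pending-approval"
    return "executed"

print(submit("triage-agent", "read", "ticket-1042"))    # → executed
print(submit("triage-agent", "delete", "ticket-1042"))  # → pending-approval
```

The design choice is that the gate sits between the agent and the system it acts on, so an agent that "hallucinates" a destructive command still cannot carry it out unreviewed.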

In light of these findings, the SANS Institute recommends that organizations adopt practices such as the implementation of secrets management vaults, automated credential rotation, and an emphasis on least-privilege access. These strategies serve as essential safeguards against the risks associated with agentic AI, though it is critical that these efforts are scaled appropriately to keep pace with the continuous growth of NHIs.
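Least-privilege access, the last of the recommended safeguards, amounts to granting each NHI an explicit allow-list of actions and denying everything else by default. A minimal sketch, with hypothetical identity and action names:

```python
# Per-identity allow-lists: each NHI gets only the actions it needs.
POLICIES = {
    "svc-report-bot": {"read:reports"},
    "agent-triage": {"read:tickets", "write:tickets"},
}

def is_allowed(identity, action):
    """Default-deny check: unknown identities and actions are refused."""
    return action in POLICIES.get(identity, set())

print(is_allowed("svc-report-bot", "read:reports"))   # → True
print(is_allowed("svc-report-bot", "delete:reports")) # → False
```

Real deployments express this through a secrets vault or cloud IAM policies rather than an in-process dictionary, but the default-deny structure is the same, and it is what keeps an over-scoped AI agent from touching infrastructure it was never meant to reach.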

As the landscape of enterprise technology evolves, the need for rigorous security measures becomes increasingly apparent. The convergence of AI and organizational workflows presents not only opportunities for enhanced efficiency but also distinct risks. The SANS Institute’s findings underscore the imperative for organizations to proactively address these vulnerabilities, establishing frameworks that foster a secure environment as they navigate the complexities accompanying the integration of intelligent automation.

