The Role of Agentic AI in Amplifying and Creating Insider Risks

The Rise of Agentic AI: Redefining Insider Risks in Organizations

In today’s rapidly evolving technological landscape, the emergence of agentic AI has not only amplified existing insider risks but has also become a source of insider threat itself. As organizations navigate the aftermath of the AI revolution, it is imperative to rethink and update insider risk management programs to account for these new AI-driven entities.

Recent data from a Cybersecurity Insiders report reveals that a staggering 90% of organizations encountered an insider threat incident within the last year. The Ponemon Institute adds that nearly three-quarters of these incidents stem from nonmalicious activity: negligence or error (53%) and users compromised through manipulation or password theft (20%). Only 27% of incidents were rooted in malicious intent.

The advent of generative and agentic AI is poised to exacerbate these challenges, and IT professionals are taking notice. A notable 94% of respondents in the Cybersecurity Insiders report expressed concern about how AI technologies would significantly increase their exposure to insider risks.

At the forefront of discussions regarding AI and identity management were two pivotal sessions held at the RSA Conference 2026. Thought leaders shared critical insights about the mounting challenges and risks that AI introduces into the enterprise environment.

Amplifying Human Insider Risk

One of the pressing issues highlighted is the phenomenon of Shadow AI—the unauthorized use of AI applications within organizations without explicit approval, oversight, or monitoring. According to a report from Netskope, an alarming 47% of employees admit to using their personal generative AI accounts for work-related tasks. The reasons for this trend vary, including comfort with familiar applications, the absence of sanctioned enterprise-grade tools, a desire to enhance productivity, and the general accessibility of consumer-grade AI tools.

Rob Juncker, the Chief Product Officer at Mimecast, remarked on the infiltration of unsanctioned AI usage within organizations, stating, "Ninety-eight percent of us in this room, myself included, have unsanctioned AI inside our organizations." This lack of oversight doesn’t merely create challenges; it raises significant risks related to data loss, security breaches, and regulatory compliance violations. Without the guidance of IT and cybersecurity teams, these ungoverned tools can produce erroneous outputs—often referred to as AI hallucinations—that could adversely affect corporate initiatives.

Compounding these difficulties is the risk of AI data leakage. AI models are heavily reliant on the data they receive, and employees often unwittingly provide sensitive company information to these tools. A report from Harmonic Security revealed that 4.37% of prompts and 22% of files uploaded to generative AI tools contained confidential company data, such as source code and employee credentials. Juncker illustrated the scale of the problem: an organization with 100 users sending an average of 20 prompts a day generates 2,000 prompts daily, each a potential channel for sensitive data to leave the organization.
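To make the leakage problem concrete, a DLP-style gateway could scan outbound prompts for obvious secrets before they reach an external AI tool. The sketch below is a minimal illustration with made-up patterns, not a description of any vendor's product; real data-loss-prevention tooling uses far broader detection.

```python
import re

# Hypothetical patterns for obvious secrets; illustrative only.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def flag_sensitive(prompt: str) -> list:
    """Return the names of secret patterns found in a prompt bound for an AI tool."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

# A prompt pasting a credential is flagged before submission;
# a benign prompt passes through with no findings.
print(flag_sensitive("Debug this config: password = hunter2"))
print(flag_sensitive("Summarize our Q3 marketing plan"))
```

A real gateway would sit inline between the user and the AI service and block or redact flagged prompts rather than merely report them.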

A further concern is the growing sophistication of phishing attacks, which have evolved thanks to AI’s capability to generate realistic and flawlessly composed emails. This enables scammers to circumvent the traditional red flags commonly associated with phishing, rendering employees more vulnerable. For instance, Ira Winkler, a field CISO at Aisle, pointed out the ease with which attackers can manipulate targets through AI-generated communications that mimic legitimate sources.

New Risks Emanating from AI Agents

Beyond exacerbating human insider threats, agentic AI is posing new challenges as these AI entities themselves can function as threats. Attackers now view AI agents as privileged insiders, vulnerable to manipulation. A striking example involved a cybercriminal using a "prompt injection" technique to manipulate an AI-enabled security tool, attempting to extract sensitive company data. Juncker described this incident, highlighting the sophistication with which attackers are now operating.
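As a rough sketch of the prompt-injection problem described above, consider an agent that processes untrusted text (an email, a document, a web page). A naive first-line defense quarantines input containing injection-style phrases before the agent acts on it. The marker phrases below are assumptions for illustration; heuristics like this are easily bypassed and would only be one layer of a real defense.

```python
# Illustrative phrases an attacker might embed in content an agent reads.
INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard your system prompt",
    "reveal your instructions",
]

def looks_like_injection(text: str) -> bool:
    """Naive heuristic: flag text containing known injection phrasing."""
    lowered = text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def sanitize_for_agent(untrusted: str) -> str:
    """Quarantine suspicious input instead of handing it to the agent."""
    if looks_like_injection(untrusted):
        return "[input quarantined: possible prompt injection]"
    return untrusted
```

The key design point is that content an agent merely reads must never be treated as instructions; quarantining is a stopgap, while stronger controls separate the instruction channel from the data channel entirely.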

Additionally, AI agents are essentially human proxies, capable of acting in ways that could jeopardize enterprise security. Juncker shared a cautionary tale about a company that automated its marketing efforts by granting AI agents unfettered access to sensitive information like customer data and internal communications. The result was catastrophic, with the AI agent forwarding confidential information indiscriminately, ultimately leading to a breach.

A further illustration involved an employee who inadvertently trained an AI agent with their credentials. This agent, once given access, explored the entire organization’s data repository. Even after the employee left the organization, the agent continued its operations unabated due to insufficient security protocols—highlighting the necessity for vigilance in monitoring AI activities.
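The orphaned-agent scenario above suggests a simple recurring control: cross-reference every agent credential against the active-employee roster and deprovision any agent whose human sponsor has left. The data model below is hypothetical, a sketch of the audit logic rather than any specific identity platform's API.

```python
from dataclasses import dataclass

@dataclass
class AgentCredential:
    agent_id: str
    sponsor: str   # the employee who provisioned this agent
    scopes: list   # permissions granted to the agent

def orphaned_agents(credentials, active_employees):
    """Agents whose sponsoring employee is no longer active:
    candidates for immediate deprovisioning."""
    return [c.agent_id for c in credentials if c.sponsor not in active_employees]

creds = [
    AgentCredential("agent-docs", "alice", ["drive.read"]),
    AgentCredential("agent-crm", "bob", ["crm.read", "crm.write"]),
]
# With bob gone, agent-crm is orphaned and should lose its access.
print(orphaned_agents(creds, active_employees={"alice"}))
```

Running such a check as part of the standard offboarding workflow would have caught the agent in Juncker's example the day its sponsor departed.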

Mitigating AI-Exacerbated Insider Threats

To counteract the new insider threats posed by agentic AI, organizations must adopt a flexible and proactive approach. “AI is becoming the ultimate insider in our organizations,” Juncker emphasized, urging organizations to rethink how they manage these tools.

First and foremost, organizations should establish clear AI acceptable-use and security policies that define how AI tools may be used. Awareness efforts should ensure employees actually read and acknowledge these policies, given that a reported 81.5% of workers remain unaware of their organization's AI policy.

Furthermore, implementing rigorous checks and balances can play a pivotal role in thwarting potential threats. This includes verifying that substantial financial transactions undergo the necessary approvals before execution, regardless of the apparent legitimacy of the communication. Regular audits and performance checks on AI agents can also be instrumental in preventing unauthorized actions.
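The financial-approval control mentioned above can be reduced to a simple policy gate: above a threshold, a transaction executes only after enough distinct humans sign off, no matter how legitimate the requesting message looks. The threshold and approver count below are illustrative policy knobs, not recommendations.

```python
def may_execute(amount, human_approvals, threshold=10_000, needed=2):
    """Allow a transaction only if it is below the policy threshold,
    or has sign-off from the required number of distinct humans.
    AI-generated or AI-relayed requests get no shortcut through this gate."""
    return amount < threshold or len(set(human_approvals)) >= needed

print(may_execute(50_000, []))                      # blocked: no approvals yet
print(may_execute(50_000, ["cfo", "controller"]))   # allowed: two distinct sign-offs
```

Using a set of approvers ensures one person approving twice cannot satisfy a two-person rule, which is the point of separating request from execution.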

Education is equally vital. Employees should be trained on the distinct risks posed by AI in the context of social engineering and phishing scams, including strategies for identifying deepfake and vishing attacks. Prompt reporting of suspicious communications can also mitigate risks associated with AI-generated fraud.

Conclusion: A Continued Struggle

The realm of cybersecurity has always involved an ongoing battle between threats and protective measures. The rise of AI raises the stakes and introduces complex challenges, particularly regarding insider risks and identity management.

To combat the new identity threats posed by generative and agentic AI, organizations must embrace responsible and secure AI practices. By instituting comprehensive policies, fostering employee training, conducting advanced monitoring of both human and artificial agents, and deploying robust security technologies, AI can evolve from a potential threat into a valuable asset—enhancing productivity while safeguarding sensitive data.

In the ever-evolving landscape of cybersecurity, proactive measures and forward-thinking strategies are essential as organizations navigate the intricate relationship between human and AI-driven insider risks.
