
65% of Organizations Still Identify Unauthorised Shadow AI Despite Optimism About Visibility


Growing Disparity Between AI Control and Reality: CultureAI Study Highlights Concerns

A recent study conducted by CultureAI has unveiled a notable disparity between the practical usage of artificial intelligence (AI) within organizations and the perceptions of those organizations regarding how well they control this technology. This research raises alarming questions about the actual oversight companies have over AI, revealing a structural divide between perceived control and the operational reality on the ground.

According to the report, titled The State of Enterprise AI Usage: The Illusion of Control and conducted by Censuswide, insights were gathered from 300 senior leaders in technology, security, and risk management across North America and Europe. The findings show that while 72% of organizations believe they have comprehensive visibility into their AI usage, 65% still detect unauthorized AI applications, commonly referred to as "shadow AI." This misalignment suggests that traditional controls cannot keep pace with the growing integration of AI tools, producing an illusion of control.

AI’s prevalence across teams is unsurprising: 67% of security leaders confirmed its extensive use throughout their organizations, while 27% noted AI’s application in specific functions. Adoption is led by core functions such as data analysis (72%) and revenue operations (59%), alongside software development and engineering, and customer support (43%). Looking ahead, respondents are optimistic, with 91% anticipating a rise in AI usage over the next year and 41% expecting significant growth. This escalation may heighten risk, however, as rapid expansion often outpaces the establishment of appropriate controls and preparation strategies.

Despite the apparent confidence in visibility, with 72% of respondents claiming a full understanding of AI usage, 65% of organizations are nonetheless grappling with unauthorized AI applications. Many tools, personal accounts, and integrated AI features evidently remain hidden from conventional oversight and controls.

Most organizations display a robust belief in their governance frameworks, supported by formal policies, oversight committees, and structured guidelines. Nevertheless, the reality is that unauthorized AI usage persists, and detection capabilities are often inconsistent. This situation fosters an illusion of control: governance frameworks exist, but behavior frequently eludes them.

Leaders highlighted several high-stakes concerns related to AI usage: exposure to compliance risks (56%), data leaks through prompts and uploads (52%), credential compromise (40%), and loss of intellectual property (39%). Despite these fears, nearly half of respondents rated their AI risk as moderate or lower, a contradiction between awareness of the risks and how severely they are assessed. This suggests that executives are not disregarding AI risk so much as struggling to quantify it in an environment where harmful incidents do not always manifest as clear breaches or alerts.

Most firms have implemented training, policies, and committees, yet lack mechanisms to address risks in real time as they arise, such as during prompts, uploads, and use of embedded AI features within Software as a Service (SaaS) tools. Approximately 62% report having introduced a formal AI governance framework, while one-third are developing one, and over 67% have created AI or risk committees with defined oversight responsibilities. Alongside this confidence in governance, however, respondents acknowledge clear operational deficiencies: 20% admit their policies are not actively enforced, and more than a third lack specialized AI detection capabilities altogether.
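To make the idea of real-time enforcement concrete: one common approach is to inspect prompts at the point of use, before they leave the organization for an external AI service. The sketch below is a minimal, hypothetical illustration, not anything described in the CultureAI report; the patterns and function names are invented for the example, and a production control would rely on a mature data loss prevention engine rather than a handful of regexes.

```python
import re

# Simplified example patterns for data that might leak via AI prompts.
# Real DLP tooling uses far richer detection than these illustrative regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def inspect_prompt(prompt: str) -> dict:
    """Screen a prompt before it is forwarded to an external AI service.

    Returns 'allow' if no sensitive pattern matched, otherwise 'block'
    together with the categories that triggered, so the event can be
    logged for governance reporting as well as stopped in real time.
    """
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
    return {"verdict": "block" if hits else "allow", "matched": hits}
```

The point of placing such a check inline, rather than relying on after-the-fact policy, is that the control acts at the exact moment the risk arises, which is the gap the report's respondents describe.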

Oliver Simonnet, the Lead Cybersecurity Researcher at CultureAI, commented on the findings, emphasizing that “Generative AI is now embedded across everyday workflows, often beyond traditional IT oversight.” Simonnet pointed out that many organizations believe they have governance frameworks that ensure control, yet the research substantiates the widening gap between perceived control and operational realities. The most significant AI risks anticipated in the near future are practical and high-probability threats associated with everyday use. He concluded by stating that while policies may articulate intent, the lack of real-time enforcement at critical points creates risks exacerbated by scale.

As organizations strive to incorporate AI responsibly and at scale, they must transcend mere policy-making and implement effective, real-time, enforceable controls at the points where risks actually originate.

Indeed, the urgency of addressing these operational gaps and enhancing oversight mechanisms cannot be overstated, given the potential repercussions of unauthorized AI usage in a rapidly evolving technological landscape.

