Almost 10% of employee-generated AI prompts contain sensitive data

Enterprise AI usage falls into three main categories: sanctioned deployments, shadow AI, and semi-shadow gen AI. Sanctioned deployments are officially authorized, whether through licensing agreements or in-house development. Shadow AI consists of consumer-grade apps that the enterprise has prohibited for legitimate reasons. Semi-shadow gen AI presents a newer challenge, falling somewhere between the two extremes.

Unauthorized shadow AI has become a major concern for Chief Information Security Officers (CISOs) because of the threat it poses to company data security. Semi-shadow AI, however, introduces a more complex problem that can be harder to manage. It is often initiated by business unit leaders who adopt paid gen AI apps without IT approval for experimentation, expediency, or productivity gains. The executive is effectively engaging in shadow IT, but the line-of-business employees using the tools may be unaware of the risks: from their perspective, they are simply following management directives as part of the company's AI strategy.

Whether classified as shadow or semi-shadow AI, free generative AI apps present the greatest challenge, because their license terms typically permit the provider to train on user queries without restriction. Research by Harmonic has found that the majority of sensitive data leakage occurs through free-tier AI applications. Over half of sensitive prompts, for example, were entered into the free tier of ChatGPT, underscoring the risks of unrestricted use of such tools.

As organizations adopt AI technologies to drive innovation and efficiency, IT departments and security teams must closely monitor and regulate the use of AI applications within the enterprise. Strict approval processes and enforced data security protocols can mitigate the risks of unauthorized and potentially harmful AI deployments, while training and education on AI tools can raise employee awareness of security threats and promote responsible AI usage across the organization.
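As a concrete illustration of what such monitoring might look like in practice, the minimal sketch below screens outgoing prompts for common sensitive-data patterns before they are allowed to reach an external gen AI service. The pattern set, function names, and block-rather-than-redact policy are all assumptions made for illustration; they are not drawn from the article or from any specific vendor's API.

```python
import re

# Hypothetical pattern set for illustration only; a real deployment
# would use a maintained DLP ruleset, not four hand-written regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_prompt(prompt: str) -> None:
    """Block a prompt containing sensitive data instead of forwarding it."""
    findings = scan_prompt(prompt)
    if findings:
        raise PermissionError(
            f"Prompt blocked; detected: {', '.join(findings)}")
    # In a real gateway, a clean prompt would be forwarded here to an
    # approved, sanctioned gen AI endpoint rather than a free-tier app.
    print("Prompt cleared for submission.")

if __name__ == "__main__":
    for text in ("Summarize our Q3 product roadmap",
                 "Email jane.doe@corp.com her SSN 123-45-6789"):
        try:
            submit_prompt(text)
        except PermissionError as err:
            print(err)
```

In a real environment, a filter like this would sit in a forward proxy, secure web gateway, or browser extension rather than in the calling application, so that it covers shadow and semi-shadow tools the security team has not explicitly integrated.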
