Almost 10% of employee-generated AI prompts contain sensitive data

Enterprise AI usage falls into three broad categories: sanctioned deployments, shadow AI, and semi-shadow gen AI. Sanctioned deployments are officially authorized, either through licensing agreements or in-house development. Shadow AI consists of consumer-grade apps that the enterprise prohibits for legitimate reasons. Semi-shadow gen AI falls somewhere between the two extremes and presents a newer challenge.

Unauthorized shadow AI has become a major concern for chief information security officers (CISOs) because of the threat it poses to company data. The emergence of semi-shadow AI introduces a murkier problem that can be harder to manage. This usage is often initiated by business unit leaders who adopt paid gen AI apps without IT approval, whether for experimentation, expediency, or productivity gains. The executive is effectively engaging in shadow IT, but line-of-business employees may not recognize the risk, since they are following management directives they assume are part of the company’s AI strategy.

Whether classified as shadow or semi-shadow AI, free generative AI apps present the greatest challenge, because their license terms typically allow unlimited training on user queries. Research by Harmonic found that the majority of sensitive data leakage occurs through free-tier AI applications; more than half of sensitive prompts, for example, were entered into the free tier of ChatGPT.
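One practical control, wherever free-tier apps are in play, is to screen prompts for sensitive content before they leave the network. What follows is a minimal sketch of such a check; the regex patterns, the category names, and the screen_prompt function are illustrative assumptions rather than any vendor’s actual DLP implementation, and a production tool would use far more robust detection than simple regexes.

import re

# Hypothetical patterns for common categories of sensitive data.
# A real DLP product would add validation, context scoring, and
# ML-based classifiers rather than relying on regexes alone.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarize: customer card 4111 1111 1111 1111, jane@example.com"
    hits = screen_prompt(prompt)
    if hits:
        print(f"Blocked: prompt contains {', '.join(hits)}")
    else:
        print("Prompt passed screening")

A check like this would typically run in a browser extension or forward proxy, so that a flagged prompt can be blocked or logged before it ever reaches a free-tier service.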

As organizations adopt AI technologies to drive innovation and efficiency, IT departments and security teams must closely monitor and govern how AI applications are used within the enterprise. Strict approval processes and enforced data security protocols can mitigate the risks of unauthorized and potentially harmful deployments, while training employees on proper AI use raises awareness of security threats and promotes responsible usage across the organization.
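Approval processes can also be backed by technical enforcement. The sketch below shows, purely as an assumed example, how an egress proxy might allowlist sanctioned gen AI endpoints; the domain list and the is_sanctioned helper are hypothetical and not tied to any specific proxy product.

from urllib.parse import urlparse

# Hypothetical allowlist of gen AI services the enterprise has sanctioned.
SANCTIONED_AI_DOMAINS = {
    "chat.internal.example.com",   # in-house deployment
    "api.openai.com",              # covered by an enterprise license
}

def is_sanctioned(url: str) -> bool:
    """Return True if the request targets an approved gen AI endpoint."""
    host = urlparse(url).hostname or ""
    return host in SANCTIONED_AI_DOMAINS

for url in ("https://api.openai.com/v1/chat/completions",
            "https://free-genai.example.net/prompt"):
    verdict = "allow" if is_sanctioned(url) else "block and log for review"
    print(f"{url} -> {verdict}")

Logging blocked requests, rather than silently dropping them, gives security teams visibility into which unsanctioned tools employees are reaching for and where semi-shadow usage is taking root.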
