Almost 10% of employee-generated AI prompts contain sensitive data

Enterprise AI usage falls into three main categories: sanctioned deployments, shadow AI, and semi-shadow gen AI. Sanctioned deployments are officially authorized, either through licensing agreements or in-house development. Shadow AI consists of consumer-grade apps that the enterprise prohibits for legitimate reasons. Semi-shadow gen AI presents a newer challenge because it falls somewhere between the two extremes.
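
To make the distinction concrete, the sketch below shows one way an inventory of gen AI tools could encode these three categories. It is a minimal, hypothetical illustration: the class names, fields, and classification criteria are assumptions for this example, not taken from the article or any specific product.

```python
from dataclasses import dataclass
from enum import Enum


class AIUsageCategory(Enum):
    SANCTIONED = "sanctioned"      # officially licensed or built in-house
    SEMI_SHADOW = "semi-shadow"    # adopted by a business unit without IT approval
    SHADOW = "shadow"              # consumer-grade app the enterprise prohibits


@dataclass
class GenAIApp:
    name: str
    it_approved: bool            # covered by a license agreement or in-house development
    business_unit_adopted: bool  # purchased or mandated by a business unit leader


def classify(app: GenAIApp) -> AIUsageCategory:
    """Map an app to one of the three usage categories described above."""
    if app.it_approved:
        return AIUsageCategory.SANCTIONED
    if app.business_unit_adopted:
        return AIUsageCategory.SEMI_SHADOW
    return AIUsageCategory.SHADOW


# Example: a paid tool a department head rolled out without IT sign-off
print(classify(GenAIApp("ExampleGenAI", it_approved=False, business_unit_adopted=True)))
# AIUsageCategory.SEMI_SHADOW
```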

Unauthorized shadow AI has become a major concern for chief information security officers (CISOs) because it poses a significant threat to company data. Semi-shadow AI, however, introduces a more complex problem that can be harder to manage. This usage is often initiated by business unit leaders who adopt paid gen AI apps without IT approval for experimentation, expediency, or productivity gains. The executive may be practicing shadow IT by using these tools, but line-of-business employees may not be aware of the risks: they are simply following management directives that they take to be part of the company's AI strategy.

Whether classified as shadow or semi-shadow AI, free generative AI apps present the greatest challenge because their license terms typically allow unlimited training on user queries. Research by Harmonic found that the majority of sensitive data leakage occurs through free-tier AI applications; more than half of sensitive prompts, for example, were entered on the free tier of ChatGPT, underscoring the risks of unrestricted use of such tools.

As organizations adopt AI technologies to drive innovation and efficiency, IT departments and security teams need to closely monitor and regulate the use of AI applications within the enterprise. Strict approval processes and enforced data security protocols can help mitigate the risks of unauthorized and potentially harmful AI deployments. Training and educating employees on the use of AI tools also raises awareness of security threats and promotes responsible AI usage within the organization.
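
One way such monitoring is sometimes implemented is a lightweight pre-submission check that flags obviously sensitive strings before a prompt leaves the organization. The sketch below is a hypothetical illustration only; the patterns are simplified examples, and a real data-loss-prevention policy would be far more extensive and tuned to the organization.

```python
import re

# Illustrative patterns for a few common sensitive-data types (assumed examples).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API key/token": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


def allow_submission(prompt: str) -> bool:
    """Block the prompt if any sensitive pattern matches, otherwise allow it."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
        return False
    return True


# Example usage
allow_submission("Summarize this contract for client jane.doe@example.com")
```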

