CyberSecurity SEE

Employees Enter Sensitive Data Into GenAI Prompts Frequently

A recent study conducted by researchers at Harmonic has shed light on the potential risks associated with the use of generative AI (GenAI) tools in organizations. According to the study, employees are sharing a wide range of sensitive data through these platforms, raising concerns about data security and privacy.

The researchers analyzed thousands of prompts submitted by users to popular GenAI platforms such as Microsoft Copilot, OpenAI ChatGPT, Google Gemini, Anthropic’s Claude, and Perplexity. They found that while many employee requests were relatively straightforward, a subset of prompts contained sensitive data, accounting for 8.5% of the total analyzed prompts.

Customer data emerged as the most frequently leaked category, comprising 45.77% of the sensitive prompts. For instance, employees were found to submit insurance claims containing customer information into GenAI platforms for faster processing. While this may improve efficiency, it also poses a significant risk of exposing sensitive customer data such as payment transactions, credit card information, and customer authentication details.

Employee data accounted for 27% of sensitive prompts in the study, indicating an increasing use of GenAI tools for internal processes such as performance reviews and hiring decisions. Legal and finance information, though less commonly exposed at 14.88%, can lead to severe corporate risks when leaked. Security information and code constituted smaller portions of the leaked data, but were noted to be the fastest-growing and most concerning categories.

Despite the potential risks associated with using GenAI, experts believe that organizations may have no choice but to adopt these tools to remain competitive. Stephen Kowski, CTO at SlashNext Email Security+, emphasized the importance of leveraging AI for efficiency, productivity, and innovation in business operations.

However, some experts caution against adopting AI technologies without a clear purpose or need. Kris Bondi, CEO of Mimoto, warned that using AI for the sake of it could lead to failure if it does not align with the organization’s goals and priorities.

To address the risks associated with GenAI, the researchers at Harmonic recommended implementing effective AI governance practices. This includes tracking input into GenAI tools in real-time, ensuring employees are using appropriate paid plans, classifying sensitive data, creating workflows, and providing training on responsible GenAI use.

While the debate over the risks and rewards of using GenAI continues, it is clear that organizations must carefully consider the implications of adopting these technologies and take necessary steps to protect sensitive data and maintain data privacy. Failure to do so could result in significant vulnerabilities and potential data breaches that could harm the organization’s reputation and bottom line.
