A recent study by Harmonic Security sheds light on the security risks of using GenAI tools in organizations. The study, which analyzed tens of thousands of prompts from business users during Q4 2024, found that nearly one in twelve prompts had the potential to disclose sensitive data.
While the majority of prompts were innocuous, with employees asking GenAI tools to perform tasks such as summarizing text or editing code, a significant portion raised concerns about the inadvertent sharing of sensitive information. Specifically, 8.5% of the prompts Harmonic Security analyzed contained potentially sensitive data.
Of these sensitive prompts, customer data was the most commonly disclosed type of information, accounting for 45.8% of the total and including details such as billing information and authentication data. Employee information, including payroll data, personally identifiable information (PII), and employment records, was also frequently shared, comprising 26.8% of sensitive prompts. Some prompts even asked GenAI tools to draft employee performance reviews.
Legal and finance data, such as sales pipeline information, investment portfolios, and merger and acquisition activity, accounted for 14.9% of sensitive prompts. Security-related information, including penetration test results, network configurations, and incident reports, appeared in 6.9% of prompts. Sensitive code, such as access keys and proprietary source code, made up the remaining 5.6%.
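To make the detection problem concrete, here is a minimal sketch of how a prompt-scanning tool might flag some of these categories before a prompt ever reaches a GenAI service. The patterns and category names below are illustrative assumptions, not Harmonic Security's actual detection logic; a production DLP tool would use far more robust techniques (checksums, context, ML classifiers).

```python
import re

# Hypothetical regex heuristics, one per risk category (loosely mirroring
# the study's breakdown). These are assumptions for illustration only.
PATTERNS = {
    "customer_data": re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # card-like numbers
    "employee_pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN format
    "sensitive_code": re.compile(
        r"\b(?:api[_-]?key|secret|password)\s*[:=]\s*\S+", re.I   # leaked credentials
    ),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of potentially sensitive data found in a prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]
```

For example, a prompt like "please debug this: api_key = sk-test-123" would be flagged under `sensitive_code`, while "summarize this article" would pass cleanly. The point is that even crude pattern matching can catch a meaningful share of accidental disclosures before they leave the organization.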
One of the key concerns identified in the study was employees' use of free-tier GenAI services, which often lack the security controls found in enterprise versions. Many free-tier tools explicitly state that they train on customer data, meaning sensitive information entered into them could be used to improve the underlying models.
The study found that a large share of employees relied on these free tiers: 63.8% of ChatGPT users, 58.6% of Gemini users, 75% of Claude users, and 50.5% of Perplexity users. This raises concerns about data leakage and unauthorized access to sensitive information.
Alastair Paterson, CEO at Harmonic Security, emphasized that organizations need to move beyond simply blocking sensitive prompts to actively managing GenAI risk. While some organizations mitigated data leakage by blocking requests or warning users about the potential risks, not all firms have that capability. Paterson also highlighted the risks of free-tier GenAI services, noting that despite the best efforts of the companies behind these tools, the risk of data disclosure remains.
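The block-or-warn approach described above can be sketched as a simple policy layer that maps detected risk categories to an enforcement action. The category names, tiers, and thresholds here are illustrative assumptions, not any vendor's actual policy.

```python
# Hypothetical enforcement tiers: block the highest-risk categories outright,
# warn the user (but allow) on medium risk, and pass everything else through.
BLOCK_CATEGORIES = {"sensitive_code", "customer_data"}
WARN_CATEGORIES = {"employee_pii"}

def enforce_policy(detected: list[str]) -> str:
    """Map the categories detected in a prompt to 'block', 'warn', or 'allow'."""
    if any(category in BLOCK_CATEGORIES for category in detected):
        return "block"
    if any(category in WARN_CATEGORIES for category in detected):
        return "warn"
    return "allow"
```

A warn-first design like this reflects the trade-off Paterson describes: firms that cannot block outright can still interrupt the user at the moment of risk, which is often enough to prevent an accidental disclosure.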
The Harmonic Security study underscores the importance of vigilance and proactive risk management when using GenAI tools in organizations. By implementing robust security controls and raising employee awareness of the risks, organizations can better protect sensitive data and reduce the threat of leakage.
