A recent survey by the US National Cybersecurity Alliance (NCA) revealed that a concerning number of employees are sharing sensitive work information with AI chatbots without authorization. The behavior is most prevalent among younger workers: 46% of Gen Z and 43% of millennial respondents admitted to sharing sensitive data, compared with 26% of Gen X workers and 14% of baby boomers. The data security implications are significant and underscore the need for greater awareness and training in organizations that use AI technologies.
The core issue is that many chatbots capture and store the information users provide, including proprietary data, sensitive emails, customer records, and more. That data may then be used to train the next generation of AI models, creating a cycle of potential exposure and vulnerability. Companies like Samsung have already experienced high-profile incidents in which sensitive information was compromised through improper use of AI chatbots, highlighting the real-world consequences of inadequate data security measures.
According to Lisa Plaggemier, executive director of the NCA, a lack of training on safe AI use is a major reason so many employees share sensitive data with chatbots. The survey found that 52% of employed participants had received no training on safe AI use, leaving them vulnerable to data breaches and other security threats. This knowledge gap has also fueled the rise of "shadow AI," in which employees use unapproved tools outside the organization's security framework, further increasing the risk of data exposure.
To address these challenges, organizations must prioritize training and establish clear guidelines for the use of AI tools that handle sensitive information. Strict access controls, data masking techniques, regular audits, and AI monitoring tools can help mitigate risks and support compliance with data security regulations. Companies should also consider segmenting the parts of the organization that handle sensitive data and limiting how much data is placed into AI queries in the first place.
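To make the data-masking idea concrete, the sketch below shows one minimal approach: scrub obvious identifiers from a prompt and cap its length before it ever leaves the organization. The patterns, placeholders, and function names are illustrative assumptions, not part of any vendor's product or the NCA's guidance.

```python
import re

# Hypothetical, minimal redaction pass; a real DLP policy would be far broader.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d[\d\s().-]{7,}\d\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_sensitive(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def prepare_prompt(prompt: str, max_chars: int = 2000) -> str:
    """Mask identifiers and truncate before handing text to any external chatbot.
    The actual chatbot call is omitted; only the guardrail is shown."""
    return mask_sensitive(prompt)[:max_chars]

if __name__ == "__main__":
    raw = "Customer jane.doe@example.com (555-867-5309) reported the outage."
    print(prepare_prompt(raw))
    # Customer [EMAIL REDACTED] ([PHONE REDACTED]) reported the outage.
```

In practice this kind of filter would sit in a gateway or browser extension between employees and any approved chatbot, so masking is applied consistently rather than left to individual judgment.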
As AI capabilities continue to evolve and become embedded in third-party software-as-a-service (SaaS) platforms, organizations must stay vigilant and monitor changes to terms and conditions that may affect data security. Emerging tools such as SaaS Security Posture Management (SSPM) can help assess the risks associated with AI usage and drive the necessary policy changes to protect sensitive information.
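The kind of check an SSPM product automates can be approximated in a few lines: compare the AI integrations actually connected to the SaaS estate against an approved-tools policy and flag anything outside it. The inventory, integration names, and scopes below are hypothetical placeholders, not output from any real platform.

```python
# Hypothetical inventory of AI integrations discovered across SaaS apps;
# an SSPM platform would populate this automatically.
discovered = [
    {"app": "CRM", "integration": "ai-notetaker", "scopes": ["read:contacts"]},
    {"app": "HelpDesk", "integration": "gpt-summarizer", "scopes": ["read:tickets"]},
    {"app": "Wiki", "integration": "search-copilot", "scopes": ["read:pages", "write:pages"]},
]

# Policy: approved integrations and the broadest scopes each may hold.
approved = {
    "ai-notetaker": {"read:contacts"},
    "search-copilot": {"read:pages"},
}

def audit(inventory, policy):
    """Return integrations that are unapproved or hold scopes beyond policy."""
    findings = []
    for item in inventory:
        allowed = policy.get(item["integration"])
        if allowed is None:
            findings.append((item["app"], item["integration"], "not on approved list"))
        elif not set(item["scopes"]) <= allowed:
            findings.append((item["app"], item["integration"], "excess scopes"))
    return findings

for app, integration, reason in audit(discovered, approved):
    print(f"{app}: {integration} -> {reason}")
# HelpDesk: gpt-summarizer -> not on approved list
# Wiki: search-copilot -> excess scopes
```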
In conclusion, the growing use of AI chatbots in the workplace presents both opportunities for increased productivity and challenges related to data security. By implementing best practices, increasing training efforts, and staying informed about emerging technologies, enterprises can effectively manage the risks associated with AI tools and protect their data integrity and brand reputation.

