A recent report has revealed that a significant portion of legal documents shared with AI tools is accessed through non-corporate accounts, and that roughly half of all source code, R&D materials, and HR and employee records are being fed into unauthorized AI tools. The volume of data input into AI tools grew nearly five-fold between March 2023 and March 2024, highlighting how rapidly end users are adopting new AI technologies.
The phenomenon of ‘shadow AI’ – end users adopting AI tools faster than IT departments can manage them – is driving this exponential growth, according to the report. The trend raises concerns about the security and privacy of sensitive corporate information being accessed and processed by unsanctioned AI systems.
One of the key issues highlighted in the report is the lack of transparency regarding where the shared data ends up. Many users may not fully understand what happens to their company’s data once it is provided to unlicensed AI platforms. For example, ChatGPT’s terms of use state that the content entered by users belongs to them, but the platform reserves the right to utilize this content for various purposes such as service provision, maintenance, development, and improvement.
This means that platforms like ChatGPT could train their models on shared employee records, raising questions about the control and ownership of sensitive corporate data. Users can opt out of having their data used for training, but there is little clarity or transparency around how AI tools actually handle this information.
The exponential growth of ‘shadow AI’ poses challenges for the IT departments and organizations tasked with safeguarding data privacy and security. With end users adopting AI technologies so rapidly, there is a pressing need for stronger governance and oversight to protect sensitive corporate information from unauthorized access and use.
As organizations continue to leverage AI tools for various applications, they must establish robust data governance policies and protocols to address the risks associated with ‘shadow AI’. This includes implementing strict controls around data sharing, ensuring transparency in AI algorithms, and providing clear guidelines on how user data is utilized and protected.
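One way such data-sharing controls are sometimes enforced in practice is a lightweight pre-submission filter that scans outbound prompts for sensitive material before they reach an external AI service. The sketch below is purely illustrative: the pattern names and rules are hypothetical examples, not a complete DLP policy or any specific vendor's implementation.

```python
import re

# Hypothetical patterns an organization might block before text is sent
# to an external AI tool. These rules are illustrative only; a real
# deployment would use a vetted, policy-driven pattern set.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
    "employee_id": re.compile(r"\bEMP-\d{6}\b"),  # assumed internal ID format
}

def check_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def allow_submission(text: str) -> bool:
    """Allow a prompt to leave the network only if no pattern matches."""
    return not check_prompt(text)
```

A filter like this would typically run in a web proxy or browser extension, logging matches for the IT team rather than silently discarding prompts, so that policy violations surface instead of staying in the shadows.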
In conclusion, the rise of ‘shadow AI’ underscores the importance of establishing a comprehensive data governance framework to mitigate security risks and protect sensitive corporate information. With the proliferation of AI technologies in the workplace, organizations must prioritize data privacy and security to safeguard against unauthorized access and use of valuable data assets.