AI Adoption Fueling Insider Data Leaks

A recent report has shed light on the increasing use of Generative AI (GenAI) in enterprise settings and the associated security risks that come with it. The report highlights a 30-fold surge in the amount of data, including sensitive corporate information, being input into GenAI applications over the past year. This surge in data sharing emphasizes the urgent need for businesses to reassess their security strategies as AI-driven tools become more integrated into everyday workflows.

Enterprise users are now sharing sensitive data such as source code, regulated information, passwords, and intellectual property with GenAI applications. Compounding this issue, 72% of users access these GenAI apps through personal accounts rather than company-managed platforms, giving rise to "shadow AI" within organizations. This lack of oversight poses a significant governance challenge for security teams: it limits visibility into what data is being shared and creates openings for data exposure and cyber threats.

The report delves into the extensive use of AI in the workplace, revealing that 90% of organizations have adopted dedicated GenAI applications, and 98% are using software with AI-powered features. While only a small percentage of employees use standalone AI apps, a significant 75% interact with AI features embedded in other enterprise tools. This widespread AI integration poses a new challenge for security teams in the form of unintentional insider threats, where employees may unknowingly compromise proprietary information by sharing it with AI platforms.

One of the alarming findings of the report is the prevalence of shadow AI within organizations, where employees utilize personal accounts to interact with AI models, bypassing company controls over data processing and storage. This unregulated use of AI tools exposes businesses to data exfiltration and regulatory non-compliance risks. To mitigate these risks, organizations are adopting strict policies, including blocking unapproved AI applications and implementing Data Loss Prevention (DLP) solutions, user coaching, and access controls to limit data exposure.
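As a minimal illustration of the DLP-style controls described above, the sketch below scans outbound prompt text for patterns that commonly indicate secrets or regulated data before it leaves the organization. The pattern list and the `check_prompt` helper are illustrative assumptions, not part of any specific vendor's DLP product.

```python
import re

# Illustrative patterns only; a real DLP policy would be far broader
# and tuned to the organization's own data classifications.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def allow_or_block(prompt: str) -> bool:
    """Block the request if any sensitive pattern matches; otherwise allow it."""
    findings = check_prompt(prompt)
    if findings:
        print(f"Blocked prompt: matched {', '.join(findings)}")
        return False
    return True

if __name__ == "__main__":
    sample = "Summarise this config: password = hunter2"
    print("Allowed" if allow_or_block(sample) else "Blocked")
```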

The report identifies two primary ways in which sensitive enterprise data is exposed to GenAI applications: through summarization requests and content generation. Employees use AI tools to condense large documents and generate text, images, and videos, inadvertently risking the exposure of confidential information to external AI systems.
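Because summarization requests are one of the exposure paths the report calls out, one hedged mitigation is to redact obvious identifiers before a document is sent to an external summarization service. The sketch below is a minimal example under that assumption; `external_summarize` is a stand-in for whatever GenAI API an organization actually uses, and the redaction rules are deliberately simplistic.

```python
import re

# Very simple redaction rules; real deployments would use a proper
# PII/secret-detection engine rather than a handful of regexes.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[API_KEY]"),
]

def redact(text: str) -> str:
    """Replace obvious identifiers before the text leaves the organization."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

def external_summarize(text: str) -> str:
    """Stub standing in for an external GenAI summarization API."""
    return f"[summary of {len(text)} redacted characters]"

def summarize_safely(document: str) -> str:
    """Redact first, then hand the cleaned text to the external service."""
    return external_summarize(redact(document))

if __name__ == "__main__":
    print(summarize_safely("Contact jane.doe@example.com, api_key=abc123"))
```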

As adoption of AI tools continues to rise, organizations face a constantly evolving security landscape. The report underscores the importance of proactive measures to address the risks of sharing sensitive data with unvetted platforms. Many organizations preemptively block unapproved AI applications and permit only approved services, reducing the risk of data exposure.
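This block-by-default posture is typically enforced at a forward proxy or secure web gateway. The minimal sketch below assumes a hypothetical allowlist of approved GenAI hosts and checks each requested URL against it; in practice this logic lives in the gateway's policy engine rather than in application code.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of company-approved GenAI services.
APPROVED_GENAI_HOSTS = {
    "genai.example-approved.com",
    "copilot.internal.example.org",
}

def is_request_allowed(url: str) -> bool:
    """Allow traffic only to approved GenAI hosts; block everything else."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_GENAI_HOSTS

for url in ("https://genai.example-approved.com/v1/chat",
            "https://personal-chatbot.example.net/api"):
    verdict = "allow" if is_request_allowed(url) else "block"
    print(f"{verdict}: {url}")
```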

A notable trend highlighted in the report is the increasing deployment of GenAI infrastructure within organizations, with a significant shift towards locally hosted AI models. While deploying AI models locally reduces dependence on third-party providers, it introduces new security challenges such as supply-chain vulnerabilities and data leakage risks. To tackle these issues, organizations are advised to strengthen their security posture by adhering to established frameworks such as the OWASP Top 10, the NIST AI Risk Management Framework, and MITRE ATLAS for AI threat assessment.
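For locally hosted models, one concrete supply-chain control is verifying downloaded weight files against a checksum published by the model provider before loading them. The sketch below assumes a hypothetical weights file and expected SHA-256 digest; it is not tied to any particular model registry or framework.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a (potentially large) file in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected_sha256: str) -> bool:
    """Refuse to load weights whose digest does not match the published value."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        print(f"Checksum mismatch for {path}: {actual}")
        return False
    return True

# Hypothetical path and digest, for illustration only.
model_path = Path("models/local-llm.safetensors")
if model_path.exists() and verify_model(model_path, "expected-digest-goes-here"):
    print("Model verified; safe to load.")
else:
    print("Model missing or failed verification.")
```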

Chief Information Security Officers (CISOs) are increasingly turning to existing security tools to combat evolving AI-driven cyber threats. Organizations are advised to assess AI usage, implement strong AI controls, and strengthen local AI security measures to safeguard sensitive data in an AI-powered environment. The report emphasizes the importance of continuous monitoring, proactive risk mitigation strategies, and robust security policies to protect enterprise data in the era of AI-driven workflows.
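As a starting point for the "assess AI usage" step, security teams can often mine existing web-proxy logs for traffic to known GenAI domains. The sketch below assumes a simple CSV log with `user` and `host` columns and a hand-maintained list of GenAI domains; both are illustrative assumptions rather than features of any specific product.

```python
import csv
from collections import Counter

# Hand-maintained, illustrative list of GenAI-related domains to look for.
GENAI_DOMAINS = {"chat.example-ai.com", "api.example-llm.net"}

def summarize_genai_usage(log_path: str) -> Counter:
    """Count GenAI requests per user from a proxy log with 'user' and 'host' columns."""
    usage = Counter()
    with open(log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            if row.get("host", "").lower() in GENAI_DOMAINS:
                usage[row.get("user", "unknown")] += 1
    return usage

# Example: print the heaviest users of unmanaged GenAI services.
# for user, count in summarize_genai_usage("proxy.csv").most_common(10):
#     print(user, count)
```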
