
Organizations are Taking Steps to Address GenAI Risks


Enterprise security teams are finally starting to catch up with the rapid adoption of AI-enabled applications that followed the public release of ChatGPT 18 months ago. A recent Netskope analysis of anonymized AI app usage data from customer environments revealed that more organizations have begun implementing blocking controls, data loss prevention (DLP) tools, live coaching, and other mechanisms to mitigate risks associated with the use of AI apps.

The majority of controls that organizations have adopted, or are in the process of implementing, focus on preventing users from sending sensitive data such as personally identifiable information, credentials, trade secrets, and regulated data to AI apps and services. Netskope's analysis showed that 77% of organizations with AI apps now use block/allow policies to restrict the use of at least one, and often multiple, GenAI apps to reduce risk, up significantly from the 53% reported in Netskope's previous study. Additionally, half of the organizations block more than two apps, with some blocking up to 15 GenAI apps due to security concerns.
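The block/allow approach described above can be sketched as a simple policy lookup, as a proxy or secure web gateway might enforce it. This is an illustrative assumption, not Netskope's implementation; the app hostnames and the "coach" fallback for unknown apps are hypothetical examples.

```python
# Minimal sketch of a block/allow policy for GenAI app hostnames.
# App lists and the live-coaching fallback are illustrative assumptions.

ALLOWED_APPS = {"chatgpt.com", "gemini.google.com"}   # corporate-approved
BLOCKED_APPS = {"beautiful.ai", "writesonic.com", "craiyon.com", "tactiq.io"}

def policy_decision(hostname: str) -> str:
    """Return 'allow', 'block', or 'coach' for a GenAI hostname."""
    host = hostname.lower()
    if host in BLOCKED_APPS:
        return "block"
    if host in ALLOWED_APPS:
        return "allow"
    # Unknown GenAI apps trigger a live-coaching warning rather than a hard block.
    return "coach"

print(policy_decision("beautiful.ai"))  # block
print(policy_decision("chatgpt.com"))   # allow
```

In practice the "coach" branch corresponds to the live-coaching controls discussed later, which warn users rather than block outright.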

The most blocked GenAI applications identified by Netskope include popular tools such as presentation maker Beautiful.ai, writing app Writesonic, image generator Craiyon, and meeting transcript generator Tactiq. The use of DLP tools to control what users can submit to GenAI tools has also seen a significant uptick, with 42% of organizations now utilizing them, up from 24% previously. Furthermore, live coaching controls, which warn users who interact with AI apps in a risky manner, have grown in popularity, with 31% of organizations implementing such policies, up from 20% last year.
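A DLP control of the kind described above inspects what users submit before it reaches a GenAI service. The sketch below uses a few regex detectors as a hypothetical example; real DLP engines rely on far richer techniques (exact-match dictionaries, file fingerprinting, ML classifiers).

```python
import re

# Illustrative DLP-style detectors for sensitive data in outbound prompts.
# Patterns and names are assumptions for the sketch, not a product's rules.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security number
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),       # AWS access key ID format
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

hits = scan_prompt("My SSN is 123-45-6789, key AKIA1234567890ABCDEF")
print(hits)  # ['ssn', 'aws_key']
```

A gateway would block, redact, or coach on the submission whenever `scan_prompt` returns any hits.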

Jenko Hwong, a cloud security researcher with Netskope Threat Labs, noted that 19% of organizations are using GenAI apps without blocking any of them, a potential indicator of shadow IT. He emphasized that organizations using GenAI applications should implement the necessary risk mitigation measures to avoid security breaches.

Current efforts focus on mitigating risks related to data sent to AI apps, with less immediate attention paid to the data received from GenAI services. Most organizations have acceptable use policies in place, but few have mechanisms to address the security and legal risks tied to erroneous or biased data generated by AI tools. Hwong suggested ways to mitigate these risks, including vendor contracts, enforcing corporate-approved GenAI apps with quality datasets, and logging all returned datasets for auditing purposes.
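The last suggestion, logging returned datasets for auditing, might look like the following minimal sketch. The record schema is a hypothetical assumption; the point is to retain enough context to trace erroneous or biased output back to a user, app, and prompt without storing the sensitive prompt itself.

```python
import datetime
import hashlib
import json

def audit_log_response(user: str, app: str, prompt: str, response: str) -> dict:
    """Build an audit record for a GenAI response. Schema is illustrative."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "app": app,
        # Hash the prompt so the audit log does not itself retain sensitive input.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response": response,
    }
    # In practice this would be shipped to a SIEM or an append-only store.
    print(json.dumps(record))
    return record
```

Pairing such records with the corporate-approved app list gives auditors a way to review what GenAI output actually entered business workflows.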

The increased focus on GenAI apps by security teams aligns with the rapid adoption of AI tools across organizations for various use cases such as coding assistance, writing support, presentation creation, and image generation. Netskope’s findings revealed a threefold increase in the average number of GenAI apps used by organizations compared to last year, with ChatGPT being the most popular app followed by Grammarly, Microsoft Copilot, Google Gemini, and Perplexity AI.

As organizations continue to leverage GenAI apps, it is crucial for security teams to proactively manage risks by controlling the data sent to these apps, reviewing policies, and staying ahead of the evolving landscape of AI technology. By taking these measures, organizations can better protect themselves from potential security threats and ensure the safe and effective use of AI-enabled applications.

