
Researchers Raise Concerns About Vulnerabilities in AI-Generated Code


Georgia Tech Warns of Rising Vulnerabilities Linked to AI Coding Tools

Researchers from Georgia Tech have raised alarms about a surge in vulnerabilities associated with AI coding tools such as Anthropic’s Claude Code. Their findings indicate a concerning trend: in March 2026 alone, at least 35 new Common Vulnerabilities and Exposures (CVEs) were reported, up from just six in January and 15 in February.

This research is part of the ongoing ‘Vibe Security Radar’ project, initiated by the Systems Software & Security Lab (SSLab) at Georgia Tech’s School of Cybersecurity and Privacy in May 2025. The project’s primary goal is to systematically track vulnerabilities that arise from AI-generated code and surface them through public advisories in databases such as the U.S. National Vulnerability Database (NVD), the GitHub Advisory Database (GHSA), and Open Source Vulnerabilities (OSV).

Hanqing Zhao, the founder of the Vibe Security Radar, emphasized the need for robust tracking mechanisms in an interview with Infosecurity. He stated, “Everyone claims that AI code is insecure, but no one is actually tracking it. We aim to provide real numbers. Not benchmarks, not hypotheticals, but tangible vulnerabilities impacting real users.” Zhao noted the urgency of such tracking as a growing number of developers adopt "vibe coding," the practice of integrating AI-generated code directly into production environments.

Zhao pointed out a stark reality: even in software teams that conduct code reviews, it is virtually impossible to catch every mistake when a significant portion of the codebase is machine-generated. As developers increasingly rely on these tools, a broader range of vulnerabilities can slip through the cracks.

Monitoring AI Tools and Their Vulnerabilities

The Vibe Security Radar currently monitors approximately 50 AI-assisted coding tools, including well-known platforms like Claude Code, GitHub Copilot, Cursor, and Google Jules. To set up the Vibe Security Radar dashboard, researchers begin by extracting data from recognized vulnerability databases. They identify the commits that address each vulnerability and trace them back to determine the original introduction of the error.
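The article does not publish the Radar's actual pipeline, but the first step it describes, pulling vulnerability records from public databases, can be sketched against OSV's public query API. The package name below is purely illustrative:

```python
import json
import urllib.request

# Public OSV API query endpoint (see osv.dev for the full schema).
OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def build_osv_query(package: str, ecosystem: str = "PyPI") -> dict:
    """Build the JSON body for an OSV /v1/query request."""
    return {"package": {"name": package, "ecosystem": ecosystem}}

def fetch_advisories(package: str, ecosystem: str = "PyPI") -> list:
    """Return the advisories OSV knows for a package (requires network access)."""
    body = json.dumps(build_osv_query(package, ecosystem)).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # OSV returns {"vulns": [...]} or an empty object when nothing matches.
        return json.load(resp).get("vulns", [])
```

Each returned advisory carries references to fix commits, which is the starting point for tracing a vulnerability back to the commit that introduced it.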

Zhao explained the methodology: “If a commit carries markers identifying an AI tool’s involvement—like a co-author tag or a bot email—we flag it.” The analytical process doesn’t stop there; the team employs AI agents to delve deeper into each vulnerability, assessing whether the AI-generated code played a role. These agents have access to the Git repository and the commit history, enabling thorough investigations rather than mere pattern matching.
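The marker check Zhao describes can be approximated with a few regular expressions over commit metadata. The patterns below are illustrative guesses, not the project's actual ruleset:

```python
import re

# Illustrative signatures of AI-tool involvement in commit metadata.
# The Vibe Security Radar's real pattern list is not public.
AI_MARKERS = [
    re.compile(r"co-authored-by:.*\bclaude\b", re.IGNORECASE),  # co-author trailer
    re.compile(r"generated with .*claude code", re.IGNORECASE),
    re.compile(r"\[bot\]"),                                     # GitHub bot account suffix
    re.compile(r"noreply@anthropic\.com", re.IGNORECASE),       # bot email
]

def flags_ai_involvement(commit_message: str, author_email: str) -> bool:
    """Return True if the commit metadata carries an AI-tool signature."""
    text = f"{commit_message}\n{author_email}"
    return any(pattern.search(text) for pattern in AI_MARKERS)

# A commit trailer of the kind Claude Code leaves behind:
msg = "Fix input validation\n\nCo-Authored-By: Claude <noreply@anthropic.com>"
print(flags_ai_involvement(msg, "dev@example.com"))  # True
```

As the article notes, this kind of check only catches tools that leave explicit traces; suggestions pasted in from Copilot-style autocompletion carry no such metadata.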

Out of the 74 confirmed CVEs directly attributed to AI coding tools, Zhao noted that Claude Code has been frequently implicated. This is primarily attributed to the fact that Claude Code “always leaves a signature,” making it easier to identify. In contrast, other tools like Copilot often provide suggestions that do not leave clear traces, complicating their monitoring.

The Hidden Scope of Vulnerabilities

Despite the data collected so far, Zhao acknowledged that the true number of CVEs caused by AI coding tools is likely much higher than what appears on the Vibe Security Radar dashboard. He estimated the real figure may range from 400 to 700 cases across the open-source ecosystem, far exceeding what visible metadata traces reveal.

A case in point is OpenClaw, a project that relies heavily on vibe coding and has accumulated over 300 security advisories. Zhao noted that many AI tool traces have been stripped by project authors, making it difficult to confirm which vulnerabilities are linked to AI.

Moreover, many vulnerabilities never receive public identifiers such as CVE or GHSA numbers, making them difficult to track at all. Zhao expressed a strong belief that AI-induced vulnerabilities are on the rise, stating, "Last month, Claude Code alone accounted for over 4% of public commits on GitHub, and this number is still climbing. More AI code means more AI-introduced vulnerabilities."

Forward-Looking Actions and Improvements

The Vibe Security Radar is a long-term initiative with ongoing development. The Radar currently relies on metadata such as co-author tags and bot emails, but Zhao and his team aim to adopt a broader approach that examines project-wide commit patterns and general coding style. AI-generated code, Zhao noted, exhibits distinctive characteristics that could be detected even without explicit metadata indicators.

In conclusion, the research from Georgia Tech emphasizes the urgent need for vigilance in coding practices as AI tools become increasingly integral to software development. The broader implications of these findings are critical for developers and organizations alike, as they strive to balance innovation with security in an ever-evolving technological landscape.
