Veracode highlights security risks of GenAI coding tools

In a session led by Veracode at Black Hat USA 2024, developers were encouraged to exercise caution when relying on AI tools and to prioritize rigorous application security testing. Chris Wysopal, CTO and co-founder of Veracode, addressed the audience with a session titled “From HAL to HALT: Thwarting Skynet’s Siblings in the GenAI Coding Era.” At the heart of the discussion was the growing trend of developers using large language models (LLMs) such as Microsoft Copilot and ChatGPT to generate code.

Wysopal emphasized the challenges associated with this practice, including the rapid pace at which code is being produced, potential issues with data poisoning, and an overreliance on AI to create secure code. Despite acknowledging the benefits of LLMs, Wysopal underscored the importance of maintaining a healthy skepticism towards AI and ensuring that proper security testing measures are in place.

During a conversation with TechTarget Editorial prior to the session, Wysopal elaborated on the rise of generative AI (GenAI) tools and the need for enhanced security testing in light of this trend. While he recognized the value of using AI to identify and address vulnerabilities in code, he cautioned against complacency when it comes to security protocols.

Two Veracode studies highlighted during the session shed light on the impact of LLM-generated code on the cybersecurity landscape. One study compared software written by humans with code generated by LLMs and found that a significant percentage of the AI-generated code contained vulnerabilities. The findings underscored the importance of thorough security assessments as AI-driven coding practices spread.

Concerns raised by Veracode echoed sentiments expressed by other security vendors regarding the risks of GenAI coding assistants. Reports from Snyk researchers earlier in the year indicated that tools like GitHub Copilot often replicated existing vulnerabilities in users’ codebases, reinforcing the need for vigilant security measures as coding practices evolve.

The proliferation of AI-generated code presents new challenges for enterprises, particularly as cybersecurity threats continue to evolve. Wysopal expressed apprehension about the rapid increase in coding velocity facilitated by LLMs, warning that security teams may struggle to keep pace with the growing volume of vulnerabilities in code.

Moreover, poisoned data sets pose a potential risk to AI-driven coding practices. Wysopal raised concerns that threat actors could manipulate open source projects to introduce insecure code into LLM training data, seeding generated code with vulnerabilities for attackers to exploit.

As the industry grapples with the implications of AI-generated code, Wysopal highlighted the need for proactive measures to mitigate potential risks. He emphasized the importance of developing new AI defenses to counter emerging threats, including the possibility of AI-generated attacks.

In the face of these challenges, Wysopal underscored the need for a paradigm shift in how software is built and secured. As the prevalence of LLM-generated code continues to rise, developers must remain vigilant in their security practices and adapt to the changing landscape of AI-driven coding.

In conclusion, Veracode’s session at Black Hat USA 2024 served as a timely reminder of the complexities and risks associated with the increased use of AI tools in coding practices. The call to prioritize security testing and exercise caution with AI underscores the ongoing need for thoughtful and proactive cybersecurity measures in an era of rapidly evolving technology.
