Is it possible for GenAI to write secure code?

Generative artificial intelligence (GenAI) learning to code like a human developer is a significant technological advance. But the speed it brings comes with a catch: the models pick up mistakes along the way, introducing potential vulnerabilities into the code they produce.

According to Chris Wysopal, CTO and co-founder of Veracode, GenAI and large language models (LLMs) learn to code from open source code, which itself may contain flaws. Speaking at the Dark Reading News Desk during Black Hat USA 2024, Wysopal noted that GenAI writes code at a much faster pace than humans, which means more code, and more vulnerabilities, produced in the same amount of time.
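To make the risk concrete, here is a purely illustrative sketch (not an example from Wysopal's talk): the kind of injection flaw that is common in open source training data and that a model can readily reproduce, next to the parameterized fix a secure-coding pass should apply.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Pattern a model can learn from flawed training code: building SQL
    # by string interpolation, which opens the door to SQL injection.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The remediated version: a parameterized query lets the driver
    # handle escaping, so user input cannot alter the statement.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```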

The key challenge, then, is identifying and fixing those vulnerabilities quickly enough to keep pace with the rate of code generation. In his research presented at the conference, titled “From HAL to HALT: Thwarting Skynet’s Siblings in the GenAI Coding Era,” Wysopal suggested using AI to detect and fix vulnerabilities in AI-generated code, while acknowledging that this approach is still a work in progress.
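The article does not spell out what such a pipeline would look like, but a minimal sketch might chain an existing static analyzer with an AI fix step. The sketch below assumes the open source Bandit scanner is installed (pip install bandit) and leaves the model call as an explicit placeholder; the function names are hypothetical and are not Veracode APIs or anything described in the talk.

```python
import json
import subprocess
import tempfile

MAX_ROUNDS = 3  # give up after a few passes and hand off to a human

def scan(code: str) -> list[dict]:
    """Run the Bandit static analyzer on a snippet and return its findings."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        ["bandit", "-f", "json", "-q", path],
        capture_output=True, text=True,
    )
    return json.loads(result.stdout).get("results", [])

def propose_fix(code: str, finding: dict) -> str:
    """Placeholder for the 'ask an LLM to rewrite the flagged code' step.
    Hypothetical; a real pipeline would call a model here."""
    raise NotImplementedError(finding["issue_text"])

def remediate(code: str) -> str:
    """Scan generated code, request a fix, and re-scan the result."""
    for _ in range(MAX_ROUNDS):
        findings = scan(code)
        if not findings:
            return code  # clean as far as the scanner can tell
        code = propose_fix(code, findings[0])
    return code  # findings remain: escalate to human review
```

Each pass re-scans the patched code, since an AI-proposed fix can itself introduce a new flaw; anything still flagged after a few rounds would go to a human reviewer.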

Current research indicates that, among existing models, an LLM named StarCoder excels at writing code with relatively few vulnerabilities. While it is not flawless, StarCoder is considered the leading model in this respect. OpenAI’s GPT-4 and GPT-3.5 have also shown proficiency in generating secure code.

Wysopal emphasized the importance of specialized LLMs that are specifically trained to produce secure code, as general-purpose LLMs may not be as effective in addressing security bugs. He mentioned that Veracode, along with other companies, is actively working on developing such specialized models to enhance the overall security of AI-generated code.

The continuing evolution and adoption of GenAI in coding workflows presents both opportunities and challenges for the cybersecurity industry. The speed and efficiency of AI-assisted coding are real benefits, but the security and integrity of the resulting code remain a critical priority. By investing in research and in specialized LLMs focused on secure coding practices, organizations can navigate the complexities of GenAI and harness its potential safely.
