
Can Generative AI Be Relied Upon to Fix Your Code?

The race to incorporate artificial intelligence (AI) into cybersecurity programs and tools is intensifying among organizations worldwide. Roughly 65% of developers already use AI in their testing efforts or plan to within the next three years. While numerous security applications can benefit from generative AI, the question remains whether it is suitable for fixing code vulnerabilities.

For many DevSecOps teams, generative AI looks like the ultimate solution to their mounting vulnerability backlogs. Two-thirds (66%) of organizations report backlogs of more than 100,000 vulnerabilities, more than two-thirds of static application security testing (SAST) findings remain unresolved three months after detection, and half of those findings stay open for as long as 363 days. DevSecOps teams hope that generative AI will let developers simply instruct a program like ChatGPT to “fix this vulnerability,” sparing them the hours and days otherwise spent on remediation.

In theory, the concept of using machine learning and AI to fix code appears promising. AI has been effectively applied to automate processes and save time in cybersecurity tools for years, particularly in tasks that are simple and repetitive. However, when it comes to applying generative AI to complex code applications, practical flaws emerge. Without adequate human oversight and careful instructions, DevSecOps teams risk exacerbating the existing problems rather than resolving them.

The advantages and limitations of generative AI related to fixing code vulnerabilities are worth considering. AI tools can be exceptionally powerful in simple, low-risk cybersecurity analysis, monitoring, or even basic remedial tasks. However, the level of complexity increases when dealing with consequential vulnerabilities. Ultimately, trust becomes the paramount concern.

Researchers and developers are still exploring the full capabilities of generative AI technology to produce complex code fixes. Generative AI predominantly relies on existing information to make decisions. This approach can be effective when translating code between programming languages or fixing well-known flaws. For instance, if a developer asks ChatGPT to convert JavaScript code into Python, it is likely to produce an accurate result. Similarly, the AI can assist in fixing cloud security configuration issues by following readily available documentation and simple instructions.
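The kind of well-known flaw described above can be made concrete. The sketch below (illustrative only; the function names are assumptions, not from the article) shows a textbook SQL injection and the parameterized-query fix that a generative AI tool can reliably produce, precisely because this pattern is abundantly documented:

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # Vulnerable: user input is concatenated directly into the SQL string,
    # the classic, well-documented flaw that generative AI fixes reliably.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn, username):
    # Fixed: a parameterized query keeps user data separate from the SQL text.
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    # The classic injection payload matches every row in the vulnerable version
    # but nothing once the query is parameterized.
    print(find_user_vulnerable(conn, "' OR '1'='1"))
    print(find_user_fixed(conn, "' OR '1'='1"))
```

Fixes like this succeed because they follow a pattern the model has seen thousands of times; the harder cases discussed next do not.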

However, most code vulnerabilities involve unique circumstances and intricate details, a far more challenging scenario for AI to handle. AI solutions may provide a “fix,” but such fixes should not be trusted without thorough verification. Generative AI cannot create anything beyond what it has already seen, and hallucinations can lead it to produce false outputs.

A recent example highlighted the risk of relying on AI-generated content without verification. A lawyer faced significant consequences after using ChatGPT to help draft court filings that cited six fictitious cases invented by the tool. The coding equivalent is wasting valuable time on a “fix” that does not even compile. OpenAI’s GPT-4 whitepaper likewise acknowledges that new exploits, jailbreaks, and emergent behaviors may surface over time and will be difficult to prevent. AI security tools and third-party solutions must therefore be thoroughly vetted and consistently updated so they do not unintentionally become backdoors into the system.

Interestingly, the rapid adoption of generative AI coincides with the growing popularity of the zero-trust movement in cybersecurity. Most cybersecurity tools are built on the principle of “never trust, always verify.” In contrast, generative AI depends on inherently trusting the information provided by known and unknown sources. This clash in principles highlights the ongoing challenge faced by organizations in striking the right balance between security and productivity.

While generative AI may not yet fulfill the lofty expectations of DevSecOps teams, it can still make incremental progress against vulnerability backlogs. For now, it is best applied to simple fixes. More complex vulnerabilities call for a verify-to-trust methodology: the power of AI guided by the expertise of the developers who wrote and own the code. This maximizes the advantages of generative AI while mitigating the risks of trusting it entirely.
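A verify-to-trust workflow can be sketched as a simple gate: an AI-proposed fix is rejected unless it compiles and passes the tests the owning developers already maintain. This is an illustrative sketch under assumed names (`accept_ai_fix`, `check_clamp` are hypothetical), not any particular tool’s API:

```python
def accept_ai_fix(candidate_source, tests):
    """Gate a hypothetical AI-proposed fix: never trust it blindly.
    Reject it unless it compiles and passes the developers' own tests."""
    # Step 1: a hallucinated fix may not even be valid code.
    try:
        code = compile(candidate_source, "<ai_fix>", "exec")
    except SyntaxError:
        return False, "rejected: does not compile"
    # Step 2: run the fix in an isolated namespace and execute the tests.
    namespace = {}
    exec(code, namespace)
    try:
        for test in tests:
            test(namespace)
    except AssertionError:
        return False, "rejected: fails existing tests"
    return True, "accepted after verification"

def check_clamp(ns):
    # A test the code owners maintain, expressing what the fix must preserve.
    assert ns["clamp"](5, 0, 3) == 3
    assert ns["clamp"](-1, 0, 3) == 0

# Three candidate "fixes": one correct, one truncated, one subtly wrong.
good_fix = "def clamp(n, lo, hi):\n    return max(lo, min(n, hi))"
bad_fix = "def clamp(n, lo, hi):\n    return max(lo, min(n, hi)"   # truncated
wrong_fix = "def clamp(n, lo, hi):\n    return n"                   # no clamping

if __name__ == "__main__":
    for fix in (good_fix, bad_fix, wrong_fix):
        print(accept_ai_fix(fix, [check_clamp]))
```

The gate itself is mechanical; the judgment lives in the tests, which is where developer ownership of the code remains essential.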

