Unpatched software vulnerabilities continue to pose a significant cybersecurity threat: data breaches stemming from known vulnerabilities cost an average of $4.17 million, according to IBM's "Cost of a Data Breach Report 2023." The core problem is the gap between a flaw's disclosure and its remediation, which gives threat actors time to exploit it. According to Verizon's "2024 Data Breach Investigations Report," malicious scanning activity typically begins within five days of a critical vulnerability being published, while nearly half of critical vulnerabilities remain unpatched two months after fixes become available.
To address this ongoing challenge, some cybersecurity experts are turning to generative AI as a potential solution. The technology has shown promise not only in identifying bugs but also in automatically generating and testing patches to fix them. Google's large language model (LLM) has already demonstrated some success, remediating 15% of simple software bugs in internal experiments.
At the RSA Conference (RSAC) 2024, Elie Bursztein, cybersecurity technical and research lead at Google DeepMind, shared insights on the use of AI in security efforts. His team is actively exploring various AI security applications, including using Google’s LLM to find and patch vulnerabilities in their codebase. The ultimate goal is to reduce or eliminate the need for manual patching by leveraging AI-driven solutions.
In a recent experiment, Bursztein's team tasked a Gemini-based AI model with identifying and patching simple vulnerabilities in the Google codebase. Engineers reviewed the AI-generated patches, and 15% were ultimately approved and merged. Even at that rate, the automated process meaningfully cut the time and engineering effort needed to address those bugs, suggesting real resource savings for future fixes.
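The workflow described here, generate a candidate patch, validate it automatically, then route survivors to a human reviewer, can be sketched as a simple triage loop. This is a hypothetical illustration, not Google's actual tooling: `generate_candidate` and `validate` are stand-ins for an LLM call and a test-suite run, respectively.

```python
from dataclasses import dataclass


@dataclass
class Patch:
    bug_id: str
    diff: str


def generate_candidate(bug_id: str) -> Patch:
    # Stand-in for an LLM call that proposes a fix for the bug.
    return Patch(bug_id, f"--- hypothetical fix for {bug_id}")


def validate(patch: Patch) -> bool:
    # Stand-in for applying the diff and re-running the test suite.
    # Here we pretend bugs tagged "complex" fail validation, mirroring
    # the observation that only simple bugs are reliably auto-fixed.
    return "complex" not in patch.bug_id


def triage(bug_ids):
    """Partition bugs into (ready_for_review, needs_human_fix)."""
    ready_for_review, needs_human_fix = [], []
    for bug_id in bug_ids:
        patch = generate_candidate(bug_id)
        if validate(patch):
            ready_for_review.append(bug_id)  # human approves or rejects
        else:
            needs_human_fix.append(bug_id)   # falls back to manual patching
    return ready_for_review, needs_human_fix
```

The key design point is that validation only filters candidates; a human reviewer still gates what lands in the codebase, which is how the 15% approval figure arises.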
While the results of the AI patching experiment show promise, Bursztein emphasized that significant challenges remain. The technology cannot yet autonomously fix the majority of bugs: different classes of vulnerabilities demand different repair strategies, and AI-suggested patches still require careful validation. Training the model to avoid problematic behaviors, and ensuring that patches do not inadvertently introduce new issues, requires extensive datasets and manual intervention.
Despite these challenges, Bursztein remains optimistic about the potential of AI in driving bug discovery and patch management in the future. He believes that with continued research and development, AI could eventually help reduce vulnerability windows and improve overall cybersecurity practices. The journey towards fully autonomous bug discovery and patching may be complex, but the benefits could be substantial for organizations looking to enhance their security posture.
In conclusion, generative AI presents a promising opportunity to address software vulnerabilities, strengthen cybersecurity practices, and reduce the risk of data breaches. Challenges remain, but the potential of AI-driven patching is significant, offering a path toward shorter vulnerability windows and a more secure digital landscape.
