In a recent study conducted by penetration testing firm Cobalt, it has been revealed that enterprises are facing challenges in resolving vulnerabilities identified during penetration tests. The study highlights that organizations are only fixing less than half (48%) of all exploitable vulnerabilities, and this number drops significantly to 21% when it comes to flagged generative AI (gen AI) application flaws.
Furthermore, the study indicates that vulnerabilities rated as high or critical severity in security audits are more likely to be resolved, with a resolution rate of 69%. Over the years, there has been a significant decrease in the median time to resolve serious vulnerabilities, dropping from 112 days to 37 days last year. This shift is attributed to the positive impact of “shift left” security programs, as mentioned by Cobalt.
Organizations face various challenges when it comes to patching vulnerabilities. Some choose to accept certain risks due to operational disruptions or high costs associated with resolving vulnerabilities. Poor remediation planning and resource limitations also contribute to slow patching processes, especially when dealing with legacy software or hardware that cannot be easily updated or replaced.
The latest State of Pentesting Report from Cobalt highlights that most firms have conducted penetration testing on large language model (LLM) web apps, with a significant number of tests uncovering vulnerabilities warranting a serious rating. The report also emphasizes that a variety of LLM flaws, including prompt injection, model manipulation, and data leakage, have been identified, with only 21% of these flaws being fixed.
Experts in the field of cybersecurity have noted that organizations are often slow to address known vulnerabilities, not due to a lack of awareness but rather as a result of competing priorities and resource constraints. Vulnerability mitigation gets delayed as security teams are overwhelmed, engineering teams prioritize feature releases, and there is a lack of urgency in resolving known issues unless regulatory pressure or a breach occurs.
The introduction of generative AI applications presents additional complexities in vulnerability remediation, as these apps are built quickly using new frameworks and third-party tools that may not have undergone thorough testing in production environments. Resolving vulnerabilities in gen AI apps can be time-consuming and complex, especially when dealing with the AI’s neural network component.
To improve remediation rates and prioritize security fixes, experts suggest integrating security tooling earlier in the development process, setting performance measures for resolving serious vulnerabilities, and establishing clear ownership for remediation efforts. Security professionals should focus on addressing the riskiest vulnerabilities, reducing technical debt, and prioritizing fixes for vulnerabilities exposed directly to the internet.
In conclusion, the challenges faced by enterprises in resolving vulnerabilities uncovered in penetration tests require a comprehensive approach that addresses technical, organizational, and cultural factors. With the rapid advancement of AI technologies, it is essential for organizations to adapt their security strategies to effectively mitigate vulnerabilities and protect their systems from potential cyber threats.