The generative AI boom has made its way into the world of DevSecOps, and its rapid growth is raising concerns alongside the enthusiasm. In the past two months, software supply chain security and observability vendors have introduced updates and new products that incorporate natural language processing interfaces into DevSecOps automation and software bill of materials (SBOM) analysis software. Dynatrace, Aqua Security, and startup Lineaje.ai are among the vendors that have added generative AI-based interfaces to their security and vulnerability management tools.
According to a survey conducted by Sonatype, a software supply chain security vendor, 97% of DevOps and SecOps leaders are already using generative AI, with 74% of them feeling pressure to adopt it. Software engineers use generative AI for researching libraries and frameworks, writing code, and testing and analyzing code for security issues. While the adoption rate is high, there are concerns about the security risks associated with generative AI.
The Open Source Security Foundation and national cybersecurity leaders recently gathered in Washington, D.C., to discuss the risks and pitfalls of generative AI in enterprise environments, calling for improvements in AI security across the board. Their discussion has fueled an ongoing debate among practitioners about generative AI's potential for DevSecOps.
DevOps and SecOps professionals surveyed by Sonatype hold differing opinions on generative AI. Developers tend to view it more cynically than security leads: 61% of developers consider the technology overhyped, compared with 37% of security leads. Some professionals urge caution about the security implications of generative AI, while others believe it can enhance software development by catching security problems and scaling scarce skillsets.
While some skepticism remains, vendors such as Lineaje, Aqua Security, and Dynatrace believe that generative AI can be a valuable tool when combined with other forms of AI data processing, automation, and analytics. Lineaje, for example, collects extensive data on software components listed in an SBOM and uses generative AI bots to prioritize vulnerabilities and recommend remediation actions. Aqua Security's AI-Guided Remediation feature uses generative AI to speed up problem-solving with its data insights, while Dynatrace aims to put a user-friendly generative AI interface in front of its own data insights.
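The underlying analysis that generative AI interfaces sit on top of, such as ranking SBOM components by the severity of their known vulnerabilities, can be sketched in a few lines. The snippet below is a simplified illustration, not any vendor's implementation: the SBOM structure loosely follows the CycloneDX JSON layout, and the vulnerability feed is a hypothetical in-memory mapping (real tools query databases such as the NVD or OSV).

```python
# Minimal sketch: prioritize SBOM components by worst known CVE severity.
# SBOM shape loosely follows CycloneDX JSON; the vuln feed is hypothetical.

def prioritize(sbom: dict, vuln_feed: dict) -> list:
    """Rank SBOM components by the highest CVSS score among their known CVEs."""
    findings = []
    for component in sbom.get("components", []):
        key = (component["name"], component["version"])
        cves = vuln_feed.get(key, [])
        if cves:
            worst = max(cves, key=lambda c: c["cvss"])
            findings.append((component["name"], worst["id"], worst["cvss"]))
    # Highest severity first, so remediation effort goes where it matters most.
    return sorted(findings, key=lambda f: f[2], reverse=True)

sbom = {"components": [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "commons-text", "version": "1.9"},
    {"name": "guava", "version": "32.1.2"},
]}
# Hypothetical feed mapping (name, version) -> known CVEs with CVSS scores.
vuln_feed = {
    ("log4j-core", "2.14.1"): [{"id": "CVE-2021-44228", "cvss": 10.0}],
    ("commons-text", "1.9"): [{"id": "CVE-2022-42889", "cvss": 9.8}],
}

for name, cve, score in prioritize(sbom, vuln_feed):
    print(f"{name}: {cve} (CVSS {score})")
```

A generative AI layer adds value on top of output like this by explaining each finding in natural language and suggesting remediation steps, rather than by performing the ranking itself.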
Despite the potential benefits, concerns about the security of generative AI persist. Some SBOM vendors have warned about the security risks associated with generative AI, stating that the open-source ecosystem lacks the necessary maturity and security posture to safeguard these models. However, researchers at Endor Labs remain optimistic about the value of generative AI for automating DevSecOps workflows, including the detection and remediation of vulnerable open source dependencies.
In conclusion, the integration of generative AI into DevSecOps tools and processes has generated both excitement and caution among professionals. While generative AI shows promise in enhancing software development and security, there are concerns about its maturity and potential security risks. Ongoing discussions and research will be crucial in understanding the full implications and benefits of generative AI in the context of DevSecOps.
