New research from software supply chain management company Sonatype has revealed the growing influence of generative AI on the work of software engineers and the software development life cycle. The study, which surveyed 800 development (DevOps) and application security (SecOps) leaders, found that an overwhelming majority (97%) currently use generative AI in their work. However, roughly three-quarters of respondents (74%) also admitted feeling pressure to use the technology despite being aware of the associated security risks.
The survey found that security risk is the biggest concern among those using generative AI, highlighting the urgent need for responsible adoption that improves both software development and security. While acknowledging the risks, respondents also recognized the technology's potential benefits. DevOps leaders cited faster software development (16%) and more secure software (15%) as the top advantages, while SecOps leaders pointed to increased productivity (21%) and faster issue identification and resolution (16%).
The survey also revealed notable differences between DevOps and SecOps teams in adoption and productivity. Nearly half (45%) of SecOps leaders have already implemented generative AI in the software development process, compared with less than a third (31%) of DevOps leaders. SecOps leaders also reported greater time savings: 57% said generative AI saves them at least six hours a week, compared with only 31% of DevOps respondents.
One issue that divided DevOps and SecOps leaders was the potential for generative AI to introduce vulnerabilities into open source code. More than three-quarters of DevOps leaders expressed concern about this, while, perhaps surprisingly, only 58% of SecOps leaders shared the same worry. The survey also found that a lack of regulation could deter developers from contributing to open source projects, with 42% of DevOps respondents and 40% of SecOps leaders considering it a potential concern.
When asked who should be responsible for regulating the use of generative AI, both DevOps and SecOps leaders agreed that it should be a shared responsibility between government and individual companies. This view was held by 59% of DevOps leaders and 78% of SecOps leaders, highlighting the need for a collective effort to ensure the safe and secure adoption of AI technology.
Brian Fox, Co-founder and CTO at Sonatype, emphasized the importance of adopting generative AI with safety and security in mind. He compared the current AI era to the early days of open source, when risks were present and new regulation was needed, and urged developers and application security leaders alike to remain cautious and mindful of the security threats posed by this nascent technology.
Beyond security concerns, the licensing and compensation debate surrounding generative AI was a major topic among both DevOps and SecOps leaders. Without proper licensing and compensation, developers could find themselves in legal limbo, facing plagiarism claims against their work. Recent rulings against copyright protection for AI-generated art have raised questions about how much human input is needed to meet the current definition of true authorship. Forty percent of respondents agreed that, in the absence of copyright law covering AI-generated output, the creator should own the copyright. Both groups also overwhelmingly agreed that developers should be compensated for their code if it is used in open source artifacts in large language models (LLMs), with 93% of DevOps leaders and 88% of SecOps leaders supporting this notion.
Sonatype's research provides valuable insight into the impact of generative AI on software engineering and the software development life cycle. It highlights the need for responsible adoption of AI technology and for prioritizing security throughout the development process. With generative AI in widespread use and its risks well recognized, there is a pressing need for regulation and protections that safeguard developers' interests and preserve the integrity of open source projects. By addressing these concerns and promoting responsible adoption, the software development community can harness the full potential of generative AI while minimizing security risks.

