CyberSecurity SEE

Companies and Developers Express Concerns Over Risks Associated with Generative AI

According to a recent survey by development services firm GitLab, the majority of developers believe that generative AI systems are necessary to boost productivity and keep pace with the demands of modern software development. However, concerns about intellectual property and security vulnerabilities remain barriers to widespread adoption.

The survey revealed that 83% of developers consider adopting AI as essential to avoid falling behind in the industry. However, 32% expressed concerns about integrating AI into their development process. Among those concerned, nearly half (48%) worried that AI could undermine the intellectual property protections of their code. Additionally, 39% were apprehensive that AI-generated code would be more susceptible to security vulnerabilities. Furthermore, more than a third of developers feared that AI systems could potentially replace their jobs.

While developers acknowledge that generative AI systems could potentially increase their efficiency, they remain wary of the potential consequences. Josh Lemos, the Chief Information Security Officer (CISO) at GitLab, emphasized that privacy and data security concerns over large language models (LLMs) continue to impede their widespread adoption. He also highlighted the importance of understanding how to best leverage generative AI features and the need for developers to adopt a new approach to interacting with their codebase.

However, it is not just developers who have concerns regarding generative AI. Proofpoint’s recent report, “Cybersecurity: The 2023 Board Perspective,” revealed that over half (59%) of corporate board members are worried about generative AI, particularly regarding the leakage of confidential information uploaded by employees to services like ChatGPT. The report also highlighted concerns over attackers’ use of generative AI systems to enhance phishing attacks and other malicious techniques.

As a result, corporate boards are urging Chief Information Security Officers (CISOs) to strengthen their defenses. Ryan Witt, a resident CISO at Proofpoint, emphasized the critical role of generative AI as a tool for defenders, especially when employing large language models (LLMs). However, Witt also acknowledged that generative AI makes it easier for attackers to craft well-written phishing and business email compromise campaigns. This poses a challenge, as traditional indicators of phishing, such as grammatical errors, become less reliable.

Several companies, including Microsoft and Kaspersky, have already embraced generative AI as a means to accelerate the work of knowledge workers. These companies have developed services based on LLMs to augment security analysts’ tasks or for internal use. Similarly, providers of developer services like GitHub and GitLab have released generative AI systems to assist programmers in producing code more efficiently.

The GitLab survey indicated that developers expect efficiency gains (55%) and faster development cycles (44%) from AI adoption. Many also hope for more secure code (40%), while 39% anticipate both advantages and drawbacks in AI-generated code. Developers are expected to be selective, embracing certain applications of generative AI while remaining cautious about others.

GitLab’s Lemos expressed enthusiasm for one specific application of generative AI: creating concise summaries from code updates or merge requests. He emphasized how this feature saves time by providing developers with a quick overview of relevant information without having to read through lengthy threads.

Despite concerns that generative AI systems may replace developers, the GitLab survey revealed that nearly two-thirds of companies have hired employees specifically to manage AI implementations. Attitudes toward AI also vary with experience: more senior developers tend to reject AI-generated code suggestions, while junior developers are more likely to embrace them. Nonetheless, both groups recognize the potential of AI to assist with tedious tasks such as writing documentation and creating unit tests.

While AI assists developers in mundane tasks, cyber attackers are also leveraging AI to enhance their techniques. Proofpoint’s Witt emphasized that AI technology should not be seen as favoring one side of the cybersecurity equation over the other. He envisions a future where AI-enhanced defenses constantly battle against AI-improved threats, requiring ongoing investment in AI technology for cybersecurity defenders to match their adversaries on the virtual battlefield.

In conclusion, while the majority of developers recognize the potential benefits of generative AI systems for increasing productivity, concerns surrounding intellectual property and security remain prevalent. Developers will likely adopt a more cautious approach to AI, selectively embracing applications that enhance their workflow while addressing potential risks. The adoption of generative AI will require collaboration between developers, CISOs, and corporate boards to ensure the effective implementation and defense against AI-enhanced threats.
