A study published in August has raised concerns about the security of code generated by GitHub Copilot, an artificial intelligence (AI) tool that assists developers in writing code. The paper, authored by researchers at New York University and posted to the arXiv preprint server (which is operated by Cornell University), examined the use of AI in code generation and evaluated the security vulnerabilities that can arise when developers rely on GitHub Copilot's suggestions.
According to the paper, GitHub Copilot continuously scans the program as the user adds lines of code and periodically uploads a subset of those lines, the cursor position, and metadata. Based on the comments, docstrings, and function names in the program, the AI then generates candidate code completions for the user to insert. Each completion carries a numerical confidence score, and the highest-scoring option is presented as the default selection.
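Copilot's actual ranking logic is not public, but the selection behavior the paper describes can be sketched in a few lines: each candidate completion carries a score, and the top-scoring candidate becomes the default suggestion. The candidate texts and scores below are purely illustrative.

```python
# Illustrative sketch only: Copilot's real scoring internals are not public.
# This models the behavior described in the paper -- each candidate completion
# has a numerical confidence score, and the highest-scoring one is the default.

def pick_default(completions):
    """Return the highest-scoring candidate as the default suggestion."""
    return max(completions, key=lambda c: c["score"])

candidates = [
    {"text": "return hashlib.md5(data).hexdigest()", "score": 0.61},
    {"text": "return hashlib.sha256(data).hexdigest()", "score": 0.34},
]

print(pick_default(candidates)["text"])
```

Note that the default in this toy example is the weaker hash: a high confidence score reflects how likely the model thinks the completion is, not how secure it is, which is precisely the gap the study measures.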
The researchers tested 1,689 programs generated across 89 different code-completion scenarios and found that approximately 40% of them contained security vulnerabilities. The study concluded that developers should exercise caution when using GitHub Copilot and suggested pairing it with security-aware tools to minimize the risk of introducing vulnerabilities.
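SQL injection (CWE-89) is among the weakness classes the study examined, and it illustrates the kind of flaw an AI assistant can introduce. The hypothetical sketch below is not code from the paper: it contrasts a completion built by string concatenation, which lets crafted input rewrite the query, with the parameterized form a security-aware reviewer would insist on.

```python
import sqlite3

# Hypothetical illustration of CWE-89 (SQL injection), the kind of flaw the
# study reports; this is not an example taken from the paper itself.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, role TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # The kind of completion an assistant might suggest: string
    # concatenation lets crafted input rewrite the query.
    query = "SELECT role FROM users WHERE name = '" + name + "'"
    return db.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input as data, not SQL.
    return db.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

# The payload turns the unsafe query into "... WHERE name = '' OR '1'='1'",
# leaking every row; the parameterized version matches nothing.
payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [('admin',)]
print(find_user_safe(payload))    # []
```

Both functions look equally plausible as editor suggestions, which is why the study recommends automated security tooling rather than relying on a reviewer to spot the difference every time.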
This finding highlights the need for organizations to carefully consider how they adopt AI features across their operating systems, API implementations, and codebases. Using AI does not make an application or its code secure by default; it introduces a new kind of input that must be reviewed and managed with appropriate safeguards.
In the case of Microsoft's built-in AI features, such as Copilot in Windows, organizations get native integration at no additional cost, and these features can be managed with tools like Group Policy or Intune. Microsoft has published guidance on proactively managing Copilot in Windows through these management tools.
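For example, Microsoft's "Turn off Windows Copilot" Group Policy (under User Configuration > Administrative Templates > Windows Components > Windows Copilot) writes a single registry value. The path and value name below reflect Microsoft's documentation at the time of writing; verify them against current guidance before deploying, as Copilot management settings have changed across Windows releases.

```
Windows Registry Editor Version 5.00

; Applied by the "Turn off Windows Copilot" Group Policy; the same setting
; can be delivered via an Intune policy or configuration profile.

[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\WindowsCopilot]
"TurnOffWindowsCopilot"=dword:00000001
```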
Taking a proactive approach to managing AI inputs is crucial to ensure the security and integrity of the code. Organizations should consider implementing security-aware tooling and conducting regular assessments to identify and mitigate any potential risks associated with the use of AI in their development processes.
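One lightweight form of security-aware tooling is a gate that inspects an AI-suggested snippet before it is accepted. The sketch below uses Python's `ast` module to flag call targets that frequently indicate vulnerabilities; the blocklist is illustrative, not exhaustive, and in practice a team would pair such a check with a full static analyzer (e.g. Bandit or CodeQL) and human review.

```python
import ast

# Minimal sketch of pre-acceptance review for AI-suggested Python code:
# parse the snippet and flag calls commonly associated with vulnerabilities.
# The blocklist is illustrative only.

RISKY_CALLS = {"eval", "exec", "os.system", "pickle.loads", "yaml.load"}

def qualified_name(node):
    """Best-effort dotted name for a call target (e.g. 'os.system')."""
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Attribute):
        base = qualified_name(node.value)
        return f"{base}.{node.attr}" if base else node.attr
    return ""

def flag_risky_calls(source):
    """Return sorted risky call names found in a snippet of Python source."""
    tree = ast.parse(source)
    return sorted(
        qualified_name(call.func)
        for call in ast.walk(tree)
        if isinstance(call, ast.Call)
        and qualified_name(call.func) in RISKY_CALLS
    )

suggestion = "import os\nos.system('rm -rf ' + user_input)\n"
print(flag_risky_calls(suggestion))  # ['os.system']
```

A check like this only parses the suggestion, so it never executes untrusted code, and it can run in an editor hook or a pre-commit step as part of the regular assessments described above.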
It is important to recognize that AI is not a replacement for human expertise. Developers should remain vigilant and exercise critical thinking when using AI tools like GitHub Copilot. While these tools can greatly enhance productivity and code generation, they should not be solely relied upon for creating secure and robust applications.
In conclusion, the study sheds light on the security vulnerabilities that can arise from using GitHub Copilot. Developers and organizations should be mindful of the risks of AI-generated code and implement appropriate security measures; by managing AI features proactively and pairing AI tools with security-aware tooling, they can keep those vulnerabilities out of their codebases.

