When It Comes to Secure Coding, ChatGPT Embodies Human-Like Aptitude

The rise of AI tools in the coding world has been met with both excitement and concern. While these tools can generate code at lightning speed, security experts have raised alarms about the quality and vulnerability of the code they produce, increasing the risk that insecure applications and web services reach unsuspecting consumers.

These AI tools are not only being used for legitimate purposes; they are also being exploited by cybercriminals. Hackers are using them for phishing campaigns, deepfake scam videos, and malware development. With the barrier to entry lowered, such attacks can be mounted far faster, posing a significant threat to online security.

While many tout these AI coding tools as revolutionary, it is crucial to look past the headlines and consider the risks. Large language model (LLM) technology has the potential to change how we approach many aspects of work, not just software development. However, it is essential to acknowledge the flaws and limitations of these tools.

One significant flaw is the prevalence of poor coding patterns in the output of AI tools like ChatGPT. These tools are trained on decades of existing code, so they inherit its common pitfalls. Without a security-aware developer at the wheel, the generated code may be neither correct nor safe. These tools can even invent nonexistent libraries, and a threat actor who spots a frequently hallucinated package name can publish a malicious package under that name and wait for developers to install the suggestion verbatim, as the sketch below illustrates.
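One lightweight defense is to verify that any dependency an assistant proposes actually exists in the package registry before installing it. The following is a minimal sketch assuming a Composer/Packagist workflow; the package name is hypothetical, the kind of plausible-sounding dependency an LLM might invent.

```php
<?php
// Minimal sketch: check whether an AI-suggested Composer package has ever
// been published on Packagist before running `composer require` on it.

function packageExistsOnPackagist(string $package): bool
{
    // Packagist serves Composer v2 package metadata at this endpoint;
    // a 404 response means the package has never been published.
    $headers = @get_headers("https://repo.packagist.org/p2/{$package}.json");
    return $headers !== false && strpos($headers[0], '200') !== false;
}

$suggested = 'acme/secure-auth-helper'; // hypothetical AI-suggested dependency

if (!packageExistsOnPackagist($suggested)) {
    fwrite(STDERR, "Package '{$suggested}' not found on Packagist -- possible hallucination.\n");
    exit(1);
}
```

Note that mere existence is not proof of safety: an attacker may already have squatted the hallucinated name, so package age, download counts, and maintainer reputation are worth checking before installing.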

Moreover, the coding community has not placed enough emphasis on security awareness and preparedness. Developers are rarely trained to write secure code by default, and the enormous corpus of training data fed into AI coding tools like ChatGPT reflects that gap, producing correspondingly lackluster security results. It falls to developers to identify security bugs and either fix them directly or craft better prompts for a more robust outcome.

A large-scale user study conducted by researchers at Stanford University supports the notion that developers need stronger security skills when using AI coding assistants. Participants who used an AI assistant across a range of security-related programming tasks tended to write less secure code than those who did not, while being more confident that their code was secure. The study underscores the importance of raising the bar for code quality and security in an AI-driven coding environment.

The road to a data breach disaster is paved with good intentions. AI coding companions are popular among developers facing growing responsibilities, tighter deadlines, and constant pressure to innovate. But without actionable security awareness, using these tools for coding invites serious security problems: they generate code faster, and they accumulate security-related technical debt just as fast. Organizations must understand the implications of letting security-untrained individuals generate production code, and the risks that come with it.

Even preliminary tests with ChatGPT have surfaced basic coding mistakes with potentially devastating consequences. For example, when asked to build a login routine in PHP backed by a MySQL database, the generated code contained rookie errors: it stored passwords in plaintext and used coding patterns vulnerable to SQL injection. Such mistakes can be corrected with careful prompting and security knowledge, but unchecked, widespread use of these tools clearly poses serious security risks.
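To make those two rookie errors concrete, here is a minimal sketch of the vulnerable pattern alongside the standard fix, using PHP's mysqli extension. The table and column names are assumptions for the example, not the code ChatGPT actually produced.

```php
<?php
// INSECURE: the kind of login routine a naive prompt can produce.
// 1) User input is concatenated into the SQL text -> SQL injection.
// 2) The password is compared in plaintext -> credentials stored unhashed.
function loginInsecure(mysqli $db, string $user, string $pass): bool
{
    $sql = "SELECT id FROM users WHERE username = '$user' AND password = '$pass'";
    $result = $db->query($sql);
    return $result !== false && $result->num_rows === 1;
}

// SAFER: a prepared statement keeps input out of the SQL text, and the
// stored value is a salted, adaptive hash rather than the password itself.
function loginSecure(mysqli $db, string $user, string $pass): bool
{
    $stmt = $db->prepare('SELECT password_hash FROM users WHERE username = ?');
    $stmt->bind_param('s', $user);
    $stmt->execute();
    $row = $stmt->get_result()->fetch_assoc();
    // password_verify() checks $pass against a hash made with password_hash().
    return $row !== null && password_verify($pass, $row['password_hash']);
}
```

The safer version leans on two standard PHP facilities: parameterized queries, which neutralize SQL injection, and password_hash()/password_verify(), which ensure a database leak exposes only salted hashes rather than usable credentials.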

Although AI/ML capabilities are expected to improve over time, the skill required to track down complex security errors will rise with them. Organizations must acknowledge the lack of practical security knowledge among developers and provide the tools and education to bridge that gap, preparing not only for familiar classes of security bugs but also for new AI-borne issues as they emerge.

While AI coding tools represent the future of a developer’s arsenal, the focus should be on ensuring that developers are equipped with the right knowledge and skills to use these tools safely. The integration of security awareness and education is essential to mitigate risks and protect against cyber threats.
