
AI-generated patches can reduce the workload of developers and operations


Large language models (LLMs) offer a tantalizing prospect: speeding up software development by automatically identifying and addressing common classes of bugs, a significant efficiency gain for developers. Google's Gemini LLM, for instance, has successfully repaired 15% of bugs found using dynamic application security testing (DAST) techniques. Although that is a small percentage, it is still a promising advance in dealing with the large number of vulnerabilities identified every year.

Google's approach focuses on fixing vulnerabilities detected by sanitizers. Sanitizers are DAST tools that instrument an application and replace its memory functions so that errors can be detected and reported. Because these vulnerabilities do not block a software release, developers often consider them less critical and give them lower priority. Google's Gemini LLM, however, has demonstrated the ability to effectively address bugs identified through sanitizers.
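To make this concrete, below is a minimal, hypothetical C example (not taken from Google's paper) of the kind of memory error a sanitizer such as AddressSanitizer reports when the program is built with the -fsanitize=address flag:

#include <stdlib.h>
#include <string.h>

/* Hypothetical example: copies a string into a heap buffer that is
 * one byte too small for the terminating NUL.
 * Build with: clang -g -fsanitize=address overflow.c
 * AddressSanitizer instruments the allocation and reports a
 * heap-buffer-overflow at the strcpy call, with a stack trace. */
int main(void) {
    const char *input = "hello";
    char *buf = malloc(strlen(input));   /* BUG: no room for '\0' */
    strcpy(buf, input);                  /* writes one byte past the end */
    free(buf);
    return 0;
}

Running the instrumented binary aborts at the strcpy call with a heap-buffer-overflow report pointing at the offending line, which is exactly the kind of reproducible, localized finding an LLM can be asked to patch.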

The success of Gemini LLM could help companies eliminate some of the backlog of vulnerabilities awaiting fixes. According to Jan Keller, a technical program manager at Google and co-author of a recently published paper, the approach could contribute significantly to reducing that backlog.

One notable advantage of Google's method is that the AI not only suggests patches but also allows the patch candidates to be tested automatically. This step is essential to ensuring that the patches are reliable and effective. As Chris Eng, chief research officer at Veracode, notes, a narrow approach that homes in on a particular class of vulnerabilities may yield better success. Google's automated approach to testing patches provides a higher degree of confidence in the effectiveness of the patches produced than traditional methods do.
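As an illustration of that automated validation step (a sketch under assumed conventions, not Google's exact pipeline), a candidate patch for the earlier overflow would be rebuilt with the sanitizer enabled and rerun against the original reproducer before being accepted:

#include <stdlib.h>
#include <string.h>

/* Hypothetical candidate patch for the earlier heap-buffer-overflow:
 * allocate room for the terminating NUL. In an illustrative validation
 * loop, the patched program is rebuilt with -fsanitize=address and the
 * original reproducer is rerun; the patch is accepted only if the
 * sanitizer report no longer fires and the existing tests still pass. */
int main(void) {
    const char *input = "hello";
    char *buf = malloc(strlen(input) + 1);  /* fixed: +1 for '\0' */
    strcpy(buf, input);
    free(buf);
    return 0;
}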

Furthermore, AI-generated patching holds promise not only for development but also for operations. AI/ML models could help create fixes for vulnerabilities discovered during development and apply patches to systems as part of IT operations. Such intelligent automation could help reduce the backlog of existing vulnerabilities and provide a more efficient way to mitigate security risks.

Advances in AI/ML models have the potential to transform the way software vulnerabilities are addressed. With machine learning models increasingly used to identify bugs, AI-generated patching and testing could resolve vulnerabilities more efficiently. If it succeeds, AI-driven patching could lead to a more streamlined, automated process for improving software security and shrinking the backlog of bugs and vulnerabilities.
