HomeCyber BalkansMost popular generative AI projects on GitHub have low security levels.

Most popular generative AI projects on GitHub have low security levels.

Published on

spot_img

Software supply chain security firm Rezilion has recently conducted an investigation into the security vulnerabilities of the most popular generative artificial intelligence (AI) projects on GitHub. The research has revealed that as these projects gain popularity and become more recent, their security measures become increasingly immature. Rezilion utilized the Open Source Security Foundation (OpenSSF) Scorecard to assess the security state of the open-source ecosystem for large language models (LLMs), drawing attention to the significant gaps in security practices and potential risks associated with many LLM-based projects. The research findings, titled “Expl[AI]ning the Risk report,” were authored by Yotam Perkal and Katya Donchenko and have been made available to the public.

Generative AI technology, which is based on LLMs, has experienced a rapid rise in popularity, enabling machines to generate realistic human-like text, images, and even code. Consequently, the number of open-source projects incorporating these technologies has grown exponentially. For instance, there are currently over 30,000 open-source projects on GitHub that utilize the GPT-3.5 family of LLMs, despite the fact that OpenAI’s ChatGPT was only introduced to the public seven months ago.

Despite their increasing demand, generative AI and LLM technologies bring about security concerns, which range from the potential risks associated with sharing sensitive business information with advanced self-learning algorithms, to malicious actors exploiting these technologies to bolster their attacks. In fact, the Open Worldwide Application Security Project (OWASP) recently released a list of the top 10 most critical vulnerabilities typically observed in LLM applications. This list emphasizes the potential impact, ease of exploitation, and prevalence of these vulnerabilities. Examples of vulnerabilities identified include prompt injections, data leakage, inadequate sandboxing, and unauthorized code execution.

To assess the security of open-source projects and aid in their improvement, the OpenSSF has developed the OpenSSF Scorecard. This tool utilizes various metrics, such as the number of vulnerabilities, frequency of maintenance, and presence of binary files, to evaluate the security of a given repository. By conducting an assessment with Scorecard, different aspects of the software supply chain are examined, including source code, build dependencies, testing, and project maintenance.

The primary goal of these checks is to ensure adherence to security best practices and industry standards. Each check is associated with a risk level, indicating the estimated risk incurred by not following a particular best practice. The scores for individual checks are then combined into an aggregate score, providing an overall evaluation of the project’s security posture.

At present, the OpenSSF Scorecard consists of 18 checks categorized into three themes: holistic security practices, source code risk assessment, and build process risk assessment. Each check is assigned an ordinal scale from 0 to 10, as well as a risk level score. A project with a score nearing 10 signifies a highly secure and well-maintained posture, while a score approaching 0 highlights a weak security posture, inadequate maintenance, and increased susceptibility to open-source risks.

The Rezilion researchers’ investigation has shed light on the security vulnerabilities associated with popular generative AI projects. The findings emphasize the importance of implementing robust security practices and adhering to industry standards to mitigate potential risks. As generative AI technology continues to evolve, it is crucial for developers and researchers to prioritize security measures to safeguard against malicious exploitation and protect sensitive information. Additionally, the OpenSSF Scorecard presents a valuable tool for evaluating and enhancing the security posture of open-source projects, contributing to the overall maturity of the generative AI ecosystem.

Source link

Latest articles

MuddyWater Launches RustyWater RAT via Spear-Phishing Across Middle East Sectors

 The Iranian threat actor known as MuddyWater has been attributed to a spear-phishing campaign targeting...

Meta denies viral claims about data breach affecting 17.5 million Instagram users, but change your password anyway

 Millions of Instagram users panicked over sudden password reset emails and claims that...

E-commerce platform breach exposes nearly 34 million customers’ data

 South Korea's largest online retailer, Coupang, has apologised for a massive data breach...

Fortinet Warns of Active Exploitation of FortiOS SSL VPN 2FA Bypass Vulnerability

 Fortinet on Wednesday said it observed "recent abuse" of a five-year-old security flaw in FortiOS...

More like this

MuddyWater Launches RustyWater RAT via Spear-Phishing Across Middle East Sectors

 The Iranian threat actor known as MuddyWater has been attributed to a spear-phishing campaign targeting...

Meta denies viral claims about data breach affecting 17.5 million Instagram users, but change your password anyway

 Millions of Instagram users panicked over sudden password reset emails and claims that...

E-commerce platform breach exposes nearly 34 million customers’ data

 South Korea's largest online retailer, Coupang, has apologised for a massive data breach...