HomeCII/OTGenerative AI Projects Create Significant Cybersecurity Risks for Enterprises

Generative AI Projects Create Significant Cybersecurity Risks for Enterprises

Published on

spot_img

Large language model (LLM)-based technologies like ChatGPT are posing significant security threats in the open source development space and across the software supply chain, according to new research by Rezilion. Despite their rapid adoption among the open source community, these technologies are overall insecure and present a substantial security risk for organizations.

The report by Rezilion, released on June 28, explores the current attitude of the open source landscape to LLMs, focusing on their popularity, maturity, and security posture. The researchers found that despite there being over 30,000 GPT-related projects on GitHub alone, the initial projects being developed are insecure. This increases the threat and security risk for organizations as they rely on these projects to develop generative AI-based technology for the enterprise.

Yotam Perkal, director of vulnerability research at Rezilion, emphasizes the urgent need for improvement in security standards and practices surrounding LLMs. He warns that without significant improvements, the likelihood of targeted attacks and the discovery of vulnerabilities in these systems will increase.

To assess the security of LLM-based open source projects, the research team investigated the 50 most popular projects on GitHub with Development time ranging from two to six months. They found that while these projects were popular, they had a generally low security rating due to their relative immaturity.

The researchers identified four key areas of generative AI security risk presented by the adoption of generative AI in the open source community. These include trust boundary risk, data management risk, inherent model risk, and basic security best practices.

Trust boundaries in open source development help establish zones of trust in which organizations have confidence in the security and reliability of an application’s components and data. However, the increased use of external resources by LLMs can be exploited by malicious actors, leading to elevated risks.

Data management risks, such as data leakage and training-data poisoning, expose enterprises to risk if not addressed by developers. LLMs can inadvertently leak sensitive information or even have their training data poisoned by threat actors to introduce vulnerabilities or biases.

Inherent underlying model risks account for two of the top LLM security problems: inadequate AI alignment and overreliance on LLM-generated content. These risks can result in false or fabricated data sources, known as “hallucinations,” which can open organizations to supply-chain attacks.

The adoption of generative AI in open source also presents general security best-practice risk, such as improper error handling and insufficient access controls. Attackers can exploit LLM error messages for sensitive information and system details, and insufficient access controls can allow unauthorized actions within the system.

To mitigate these risks, Rezilion researchers recommend adopting a secure-by-design approach when implementing generative AI-based systems. This includes leveraging existing frameworks like the Secure AI Framework (SAIF) and regularly monitoring and auditing LLM interactions for potential security and privacy issues.

Organizations must be aware of the unique challenges and security concerns related to integrating generative AI and LLMs, and the responsibility for preparing and mitigating these risks lies with both organizations and developers involved.

In conclusion, the rapid adoption of generative AI and LLM-based technologies in the open source community presents significant security risks. Organizations must take a secure-by-design approach and implement proper monitoring and auditing to mitigate these risks and protect sensitive information and systems.

Source link

Latest articles

Man dies in Nizamabad Cyber Crime police custody, case registered

A tragic incident unfolded in Nizamabad as 31-year-old Alakunta Sampath passed away while under...

Symantec Shows OpenAI’s Operator Agent in Proof of Concept Phishing Attack

Symantec’s recent findings have shed light on the potential cybersecurity threats posed by the...

The Cyber Insurance Conundrum by Cyber Defense Magazine

Cybersecurity teams, under the guidance of Chief Information Security Officers (CISOs), are facing constant...

Live Ransomware Demo: Witness Hackers Breaching Networks and Demanding Ransom

The evolution of cyber threats continues to present challenges for individuals and organizations alike....

More like this

Man dies in Nizamabad Cyber Crime police custody, case registered

A tragic incident unfolded in Nizamabad as 31-year-old Alakunta Sampath passed away while under...

Symantec Shows OpenAI’s Operator Agent in Proof of Concept Phishing Attack

Symantec’s recent findings have shed light on the potential cybersecurity threats posed by the...

The Cyber Insurance Conundrum by Cyber Defense Magazine

Cybersecurity teams, under the guidance of Chief Information Security Officers (CISOs), are facing constant...