In Washington this week, national cybersecurity officials and open source community leaders gathered to discuss the intersection of AI and open source security. The meeting comes as the tech industry experiences a surge in AI adoption, driven by the rise of large language models (LLMs) and generative AI. But the use of AI in cybersecurity also presents risks, which were a focus of the Secure Open Source Software (SOSS) Summit organized by the Open Source Security Foundation (OpenSSF).
During the summit, officials from the National Security Council, Office of the National Cyber Director, and the Cybersecurity and Infrastructure Security Agency engaged in discussions with community leaders about the need for a comprehensive secure software workbench for OSS developers. One of the key topics explored was the role of AI in improving cybersecurity and ensuring the security of open source AI models and packages.
The OpenSSF presented a list of objectives that included the supply chain security of OSS packages used in AI, the security of open source AI packages, and the use of AI to augment OSS security. These objectives emphasized ensuring that AI models and their associated open source components are secure and protected from potential threats.
JFrog, a vendor participant at the SOSS Summit, announced a product update aimed at the security concerns the OpenSSF raised. JFrog's ML Model Management feature applies static application security testing and security policies to AI models and their accompanying open source packages, letting organizations identify and manage malicious machine learning models and software packages. Katie Norton, an analyst at IDC, noted that JFrog's focus on securing AI/ML models sets it apart from other DevOps vendors.
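To make the threat concrete: many ML models are distributed as Python pickle files, which can execute arbitrary code when loaded. The sketch below shows, in rough terms, what static scanning of such a model might look like. It is an illustrative heuristic built on Python's standard pickletools module, not JFrog's actual scanner.

```python
# A minimal, heuristic sketch of static scanning for a pickle-based
# ML model file. Pickle can invoke arbitrary callables at load time,
# which is how malicious models typically smuggle in code execution.
import pickletools
import sys

# Modules whose presence inside a model file is a strong red flag.
SUSPICIOUS_MODULES = {"os", "subprocess", "builtins", "sys", "socket"}

def scan_pickle(path: str) -> list[str]:
    findings = []
    recent_strings = []  # strings pushed before a STACK_GLOBAL opcode
    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f):
            if opcode.name == "GLOBAL":
                # arg looks like "module_name attr_name"
                module = str(arg).split()[0]
                if module.split(".")[0] in SUSPICIOUS_MODULES:
                    findings.append(f"GLOBAL import of {arg!r}")
            elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
                recent_strings.append(str(arg))
            elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
                # Heuristic: STACK_GLOBAL usually consumes the two most
                # recently pushed strings as (module, attribute).
                module, attr = recent_strings[-2], recent_strings[-1]
                if module.split(".")[0] in SUSPICIOUS_MODULES:
                    findings.append(f"STACK_GLOBAL import of {module}.{attr}")
    return findings

if __name__ == "__main__":
    for issue in scan_pickle(sys.argv[1]):
        print("SUSPICIOUS:", issue)
```

A real scanner would also track the pickle memo, inspect nested archives such as PyTorch's zip format, and cross-reference findings against threat intelligence, but the core idea is the same: inspect the serialized opcodes without ever loading the model.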
The need for secure AI is underscored by a recent IDC market research survey, which found that while developers express some confidence in the security of code generated by AI coding tools, most encounter vulnerabilities in that code on a regular basis. As AI becomes more integrated into commercial applications, converging DevSecOps and MLOps processes becomes essential, so that software development and model development proceed in lockstep and the risk of malicious code being injected into AI models is minimized.
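One concrete expression of that convergence is a CI gate that applies the same admission policy to software packages and model artifacts before either is promoted. The sketch below is hypothetical: the policy.json file, its blocked_packages field, and the check criteria are placeholder assumptions, not any vendor's actual schema.

```python
# A hedged sketch of a pre-admission policy gate in a CI pipeline.
# The policy file format here is a hypothetical placeholder.
import json
import sys

def load_policy(path: str) -> dict:
    with open(path) as f:
        return json.load(f)

def is_admissible(name: str, version: str, policy: dict) -> bool:
    # Block if this exact name/version pair is on the denylist, e.g.
    # because a scanner flagged it as malicious or vulnerable.
    blocked = policy.get("blocked_packages", {})
    return version not in blocked.get(name, [])

if __name__ == "__main__":
    pkg_name, pkg_version = sys.argv[1], sys.argv[2]
    policy = load_policy("policy.json")  # hypothetical policy file
    if not is_admissible(pkg_name, pkg_version, policy):
        print(f"BLOCKED: {pkg_name}=={pkg_version} violates policy")
        sys.exit(1)  # nonzero exit fails the pipeline stage
    print(f"ADMITTED: {pkg_name}=={pkg_version}")
```

The point of such a gate is that the same check runs regardless of whether the artifact is a library dependency or a downloaded model, which is what it means for DevSecOps and MLOps to converge.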
The potential dangers of insecure AI were highlighted by Yoav Landman, CTO of JFrog, who said malicious models have already appeared on popular community hubs such as Hugging Face. JFrog's new product lets organizations host Hugging Face models behind a controlled proxy, where they can be scanned and governed with security and compliance policies. JFrog's DevSecOps platform can also block open source libraries and packages that fail specific criteria before they are admitted into CI/CD pipelines, and its release lifecycle management feature adds digital signatures to application packages to guarantee their integrity.
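The integrity guarantee a digital signature provides is straightforward to illustrate. The following sketch uses the open source cryptography library and Ed25519 keys; it demonstrates the general technique only, not JFrog's signing implementation, and the artifact filename is a stand-in.

```python
# A minimal sketch of signing and verifying a release artifact with
# Ed25519, using the "cryptography" library. Illustrates the general
# technique only; real release pipelines manage keys far more carefully.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_artifact(private_key: Ed25519PrivateKey, path: str) -> bytes:
    with open(path, "rb") as f:
        return private_key.sign(f.read())

def verify_artifact(public_key: Ed25519PublicKey, path: str,
                    signature: bytes) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    try:
        public_key.verify(signature, data)  # raises on any tampering
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    # "app-package.tar.gz" is a hypothetical artifact name.
    sig = sign_artifact(key, "app-package.tar.gz")
    assert verify_artifact(key.public_key(), "app-package.tar.gz", sig)
```

Because verification fails if even one byte of the package changes after signing, a consumer who trusts the publisher's public key can detect any tampering between build and deployment.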
Industry experts predict that secure AI will become a top concern for enterprise IT as the use of AI continues to grow. Organizations recognize the importance of securing AI but often lack the knowledge and tools to do so effectively. The SOSS Summit's exploration of the nexus between open source AI, security, and the application of AI to security is a crucial step toward addressing these challenges and ensuring a more secure AI landscape.
Overall, the convergence of AI and open source security brings both opportunities and risks. The discussions held at the SOSS Summit and the product update from JFrog demonstrate the industry’s commitment to addressing these challenges and securing AI models and open source packages. As AI continues to evolve, it is essential for organizations to prioritize security in order to harness the full potential of this technology while mitigating the associated risks.
