CyberSecurity SEE

AI Risk Database Addresses Risks in the AI Supply Chain

An emerging free tool that analyzes artificial intelligence (AI) models for risk has the potential to become a mainstream part of cybersecurity teams’ toolboxes. The tool, known as the AI Risk Database, was created by the AI risk experts at Robust Intelligence and has recently been enhanced with new features. Today, it has been open sourced on GitHub, thanks to new partnership agreements with MITRE and Indiana University.

The AI Risk Database aims to assist the security community in discovering and reporting security vulnerabilities found in public machine learning (ML) models. It also tracks other factors that threaten the reliability and resilience of AI systems, such as brittleness, ethical problems, and AI bias. Hyrum Anderson, a distinguished ML engineer at Robust Intelligence and co-creator of the database, envisions it becoming the “VirusTotal for AI,” referring to the popular tool used to detect and analyze malware.

The tool is being developed to address a potential supply chain problem in the world of AI systems. Like other parts of the software supply chain, AI systems rely on various open source components. However, AI systems introduce an added layer of complexity by depending on open source ML models and open source data sets for training. As a result, a flaw in a single model can ripple across many AI systems. Anderson states, “AI supply chain security is going to be a huge issue for code, models, and data.”

The AI Risk Database has also been enhanced with a new dependency graph feature developed by researchers at the Indiana University Kelley School of Business Data Science and Artificial Intelligence Lab (DSAIL). The feature scans the GitHub repositories used to create a model to identify publicly reported flaws that exist upstream of the delivered model artifact.
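Conceptually, an upstream scan like this can be thought of as a graph traversal from a model artifact through its transitive dependencies, cross-referenced against a list of public advisories. The minimal sketch below illustrates the idea only; all package names and advisory IDs are hypothetical, and this is not the database's actual implementation.

```python
# Illustrative sketch of upstream flaw detection via a dependency graph.
# All package names, edges, and advisory IDs below are hypothetical.

# Hypothetical dependency graph: artifact/package -> direct upstream packages.
DEPENDENCIES = {
    "example-sentiment-model": ["example-tokenizer", "example-trainer"],
    "example-tokenizer": [],
    "example-trainer": ["example-tensor-lib"],
    "example-tensor-lib": [],
}

# Hypothetical public advisories keyed by package name.
KNOWN_FLAWS = {
    "example-tensor-lib": ["EXAMPLE-2023-0001 (unsafe deserialization)"],
}

def upstream_flaws(artifact: str) -> dict:
    """Walk the dependency graph and collect reported flaws upstream of artifact."""
    seen, stack, findings = set(), [artifact], {}
    while stack:
        pkg = stack.pop()
        if pkg in seen:
            continue
        seen.add(pkg)
        if pkg in KNOWN_FLAWS:
            findings[pkg] = KNOWN_FLAWS[pkg]
        stack.extend(DEPENDENCIES.get(pkg, []))
    return findings

print(upstream_flaws("example-sentiment-model"))
```

Here the flaw surfaces even though the vulnerable package sits two hops upstream of the model artifact, which is exactly why transitive analysis matters for ML supply chains.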

The partnership with MITRE will also strengthen the vulnerability research, classification, and risk scoring capabilities of the AI Risk Database. By aligning closely with the MITRE ATLAS framework, the database can draw on ATLAS's catalog of adversary tactics and techniques, compiled from real-world attack observations and AI red teaming. This collaboration aims to inform risk assessment and mitigation priorities for organizations globally.

Douglas Robbins, MITRE’s vice president of engineering and prototyping, emphasizes the importance of the collaboration and the release of the AI Risk Database. He believes it will allow organizations to understand the risks and vulnerabilities associated with deploying AI-enabled systems more effectively. Robbins states, “As the latest open-source tool under MITRE ATLAS, this capability will continue to inform risk assessment and mitigation priorities for organizations around the globe.”

To showcase the enhanced capabilities of the AI Risk Database, the collaborative team from Robust Intelligence, MITRE, and Indiana University will be conducting demonstrations at Black Hat Arsenal. Hyrum Anderson, Christina Liaghati (lead for MITRE ATLAS and AI strategy), and Sagar Samtani (director of Kelley’s DSAIL at Indiana University) will demonstrate the database’s capabilities during sessions scheduled for today and tomorrow.

As AI becomes increasingly integrated into various industries, the need for robust cybersecurity measures becomes paramount. The AI Risk Database, with its ability to identify and report security vulnerabilities and other threat factors in AI models, has the potential to become an essential tool for cybersecurity teams. By collaborating with renowned institutions like MITRE and Indiana University, the database is expected to evolve to meet the challenges posed by the complex AI supply chain and to enhance risk assessment and mitigation efforts for organizations worldwide.
