Google has introduced the Secure AI Framework (SAIF), which aims to address the need for a common security framework across the public and private sectors for the safe implementation of AI models. The tech giant said that the SAIF is designed to help mitigate specific AI systems risks such as model theft, poisoning of training data and injection of malicious inputs. The launch of this framework comes as concerns rise around the risks that generative AI could bring, such as sharing sensitive business information with advanced self-learning algorithms or the use of these technologies by malicious actors to significantly enhance attacks.
The Open Worldwide Application Security Project (OWASP) recently published a list of the top 10 most critical vulnerabilities seen in large language model (LLM) applications that many generative AI chat interfaces are based upon, highlighting their potential impact, ease of exploitation, and prevalence. Examples of vulnerabilities include prompt injections, data leakage, inadequate sandboxing, and unauthorized code execution.
Google said its SAIF is built on six AI security principles. These include expanding strong security foundations to the AI ecosystem, extending detection and response to bring AI into the threat universe, automating defenses to maintain pace with existing and new threats, harmonizing platform-level controls to ensure consistent security, adapting controls to adjust mitigations, and contextualizing AI system risks in surrounding business processes.
Google has also set out the steps it is taking and will take to advance the framework, including fostering industry support for SAIF with the announcement of key partners and contributors in the coming months, continued industry engagement to help develop the NIST AI Risk Management Framework, and ISO/IEC 42001 AI Management System Standard. It will also work directly with organizations, including customers and governments, to help them understand how to assess AI security risks and mitigate them.
In addition, Google will share insights from its leading threat intelligence teams like Mandiant and TAG on cyber activity involving AI systems, expand its bug hunters’ programs to incentivize research around AI safety and security, and continue to deliver secure AI offerings with partners like GitLab and Cohesity. Finally, Google will further develop new capabilities to help customers build secure systems.
The launch of Google’s SAIF is a significant step in ensuring that responsible actors safeguard the technology that supports AI advancements and adopts security measures as AI capabilities become increasingly integrated into products globally.
