In a recent Help Net Security interview, the CEO of Protect AI, Ian Swanson, delved into the concept of “secure AI by design” and how organizations can ensure the safety and trustworthiness of their AI systems. By embracing frameworks like Machine Learning Security Operations (MLSecOps) and prioritizing transparency, companies can establish robust AI systems that are resilient to threats.
The idea of “secure AI by design” is gaining traction within the industry, with a focus on embedding security practices from the earliest stages of AI development. This approach means addressing threats unique to AI systems, such as model serialization attacks and large language model jailbreaks. To kick-start the process, organizations can create a machine learning bill of materials (MLBOM) that catalogs their ML models and environments, and pair it with continuous vulnerability scanning.
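As a rough illustration of what such scanning can look for, the sketch below (a minimal Python example, not a reference to any particular product) inspects a pickle-serialized model file for opcodes that can import modules or invoke code when the file is loaded, which is the mechanism model serialization attacks typically abuse. The file path is a placeholder.

```python
import pickletools
from pathlib import Path

# Opcodes that can import modules or call objects when a pickle is loaded;
# these are what serialized-model attacks typically rely on to run payloads.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan_pickled_model(path: str) -> list[str]:
    """Return descriptions of suspicious opcodes found in a pickle-serialized model."""
    data = Path(path).read_bytes()
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS_OPCODES:
            findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    return findings

if __name__ == "__main__":
    # "model.pkl" is a placeholder for a model artifact under review.
    for finding in scan_pickled_model("model.pkl"):
        print("suspicious:", finding)
```

A check like this flags artifacts for human review rather than proving them safe, which is why it is paired with an MLBOM that records where each artifact came from.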
A critical aspect of building secure AI systems is training ML engineers and data scientists in secure coding practices, enforcing strict access controls, and steering clear of unsecured development environments such as unprotected Jupyter Notebooks. Adopting a code-first approach and conducting red-team testing also improve visibility and accountability throughout the AI pipeline.
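One common way to make that accountability concrete is to gate every model artifact on an integrity check before it enters the pipeline. The following is a minimal sketch assuming a team-maintained JSON manifest of approved SHA-256 digests; the file names "approved_models.json" and "model.pt" are hypothetical, and this is an illustration rather than a practice prescribed in the interview.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: str) -> str:
    """Hash a model artifact so it can be compared against an approved manifest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_against_manifest(artifact: str, manifest_path: str) -> bool:
    """Admit an artifact into the pipeline only if its digest appears in the manifest."""
    manifest = json.loads(Path(manifest_path).read_text())
    return sha256_of(artifact) in set(manifest.get("approved_sha256", []))

if __name__ == "__main__":
    # Both file names are placeholders; a real pipeline would pull the manifest
    # from a controlled registry rather than the local filesystem.
    if not verify_against_manifest("model.pt", "approved_models.json"):
        raise SystemExit("model artifact is not on the approved list; refusing to load")
```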
When designing secure AI systems, organizations should adhere to the key principles outlined in frameworks from bodies such as NCSC and MITRE. These principles emphasize transparency, auditability, adherence to privacy regulations such as GDPR, thorough risk assessments, and securing training data. Practices such as red-team testing, threat modeling, and following a secure development lifecycle are also recommended to fend off emerging security risks.
As adoption of AI systems continues to grow, new threats are emerging, including supply chain vulnerabilities, attacks hidden inside machine learning models, and compromises of GenAI applications. To combat these evolving risks, organizations are advised to implement a robust MLSecOps strategy, integrate security early in the AI lifecycle, continuously scan for vulnerabilities, and educate their teams on AI security best practices.
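On the supply chain point, even a simple inventory pass over downloaded third-party models can surface artifacts that deserve review before use. The sketch below is one such hedge, assuming a local directory of external models (the path is a placeholder) and flagging pickle-based formats that can execute code on load.

```python
from pathlib import Path

# Extensions that commonly indicate pickle-based serialization, which can
# execute code on load; flag them for review before third-party models are used.
PICKLE_LIKE = {".pkl", ".pickle", ".pt", ".pth", ".joblib"}

def flag_risky_artifacts(model_dir: str) -> list[Path]:
    """List artifacts in a downloaded model directory that warrant a security review."""
    return [p for p in Path(model_dir).rglob("*") if p.suffix.lower() in PICKLE_LIKE]

if __name__ == "__main__":
    # "third_party_models" is a placeholder for wherever external models land.
    for artifact in flag_risky_artifacts("third_party_models"):
        print("review before use:", artifact)
```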
In the public sector, where critical infrastructure and sensitive data are at stake, AI security must be approached with heightened vigilance. These organizations should follow stricter compliance frameworks, prioritize end-to-end encryption and access controls, and build AI models with transparency and auditability in mind. Regular incident response planning, red-team testing, and cross-agency collaboration are crucial for mitigating risks in the public sector.
Explainability and transparency are increasingly seen as vital components of AI security. Organizations can ensure their AI systems are both secure and explainable by adopting interpretable model development frameworks, applying Explainable AI (XAI) techniques, and implementing an MLSecOps approach for continuous monitoring. Collaboration among stakeholders and clear documentation of model training data and decision pathways are essential for building trust and transparency in AI systems.
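As one example of an XAI technique that fits this picture, permutation importance measures how much a model's held-out performance degrades when each feature is shuffled, giving a simple, auditable record of what the model relies on. The sketch below uses scikit-learn and a public dataset purely for illustration; the model and dataset are assumptions, not details from the interview.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit a model on a public dataset, then measure how much shuffling each feature
# degrades held-out accuracy; large drops indicate features the model relies on.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Recording the top-ranked features alongside the model is one simple way to
# document a decision pathway that auditors and stakeholders can inspect.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```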
By proactively addressing security concerns, fostering collaboration across teams, and staying abreast of emerging threats, organizations can build secure and trustworthy AI systems that meet the needs and expectations of stakeholders.
