CISOs face a rapidly changing landscape in managing AI risks, as highlighted by recent guidance documents from the National Institute of Standards and Technology (NIST), released under a 2023 White House executive order on AI safety and security. The guidance is intended to help organizations mitigate AI-related risks across dimensions including validity, reliability, safety, security, accountability, transparency, explainability, interpretability, privacy, and fairness.
The release of these documents, together with NIST's revival of the Dioptra test platform, signals a concerted effort to help CISOs and security teams navigate the complexities of AI risk management. Yet despite the significant resources devoted to building AI risk models, CISOs still have little practical advice on how to address and manage these risks.
A key challenge identified by industry experts is that AI technology evolves quickly and poses risks distinct from those of traditional software and code. CISOs have become well versed in supply chain risks for conventional software, but managing risks in AI models is a newer discipline. Alon Schindel, VP of data and threat research at Wiz, underscores this point, noting the novelty of AI technology and its implications for risk management.
The executive order and the subsequent guidance documents stress proactive measures to ensure the safe, secure, and trustworthy development and use of AI. By giving CISOs resources and tools to assess the trustworthiness of AI systems, NIST aims to help security teams identify and address potential risks before they escalate.
Given the pace of AI advances and the growing integration of AI models across industries, CISOs need to stay current on developments in AI risk management. The guidance from NIST and the US AI Safety Institute offers insight into key areas of concern, including bias management, privacy protection, and transparency in AI systems.
As the threat landscape evolves and AI becomes more pervasive, CISOs must remain vigilant and proactive in managing AI risks. By leveraging the guidance and tools from NIST and other government agencies, security teams can better safeguard critical systems and data from AI-related threats.

In short, NIST's new guidance underscores the need for CISOs to prepare for a fast-moving environment in AI risk management. Staying informed and acting early will help security teams mitigate threats and ensure the safe, secure use of AI in their organizations.