The National Institute of Standards and Technology (NIST) has formed a new working group to address the security risks associated with generative artificial intelligence (AI). Even as security companies continue to ship AI-powered products and features, researchers have repeatedly warned of the security holes the technology can open. To help organizations implement generative AI more safely, NIST aims to develop strategies and frameworks that mitigate those risks.
NIST’s initiative follows the release of the AI Risk Management Framework (AI RMF 1.0) in January and the launch of the Trustworthy and Responsible AI Resource Center in March. The Public Working Group on Generative AI, launched on June 22, will build on that framework to guide the implementation of generative AI across systems and applications. The group will first develop a profile for AI use cases, then move on to testing generative AI and evaluating its potential to address global challenges such as health, climate change, and the environment.
Generative AI has recently drawn considerable attention due to its experimental nature, its cybersecurity risks, and its significant business implications. The launch of ChatGPT in November, in particular, brought the technology to the forefront of public awareness. Recognizing the value of engaging the developer and security communities, NIST plans to participate in the AI Village at DEF CON 2023 in Las Vegas on August 11.
NIST’s generative AI working group provides valuable resources and information on its website, including video conversations with industry experts. Additionally, the National Artificial Intelligence Advisory Committee recently released its Year 1 Report, which offers insights into the current landscape and future prospects of artificial intelligence.
The formation of the working group reflects growing recognition of the security concerns surrounding AI and the need for effective risk management. As AI adoption expands across industries, establishing robust frameworks to address potential risks becomes increasingly important. By collaborating with industry experts and the developer community, NIST aims to produce guidelines that support the safe and responsible deployment of generative AI while minimizing security vulnerabilities.

