CyberSecurity SEE

DHS Debuts Secure AI Framework for Critical Infrastructure

The US Department of Homeland Security (DHS) has recently released a comprehensive set of recommendations aimed at fostering the secure development and deployment of artificial intelligence (AI) within critical infrastructure. The guidelines apply to a wide range of stakeholders across the AI supply chain, from cloud and compute infrastructure providers to AI developers and critical infrastructure owners and operators. The recommendations also reach civil society and public-sector organizations involved in the AI ecosystem.

Outlined in the document titled “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure,” these voluntary recommendations delineate the roles and responsibilities across five key areas crucial for the secure utilization of AI. These areas include securing environments, driving responsible model and system design, implementing data governance, ensuring safe and secure deployment, and monitoring performance and impact. Additionally, the framework offers technical and process recommendations aimed at enhancing the safety, security, and trustworthiness of AI systems.

According to the DHS, AI is already being leveraged across various sectors for resilience and risk mitigation purposes. Examples cited include the use of AI applications for earthquake detection, stabilizing power grids, and optimizing mail sorting operations. In light of these applications, the DHS framework seeks to address the specific responsibilities of each role involved in the AI ecosystem.

For cloud and compute infrastructure providers, the framework emphasizes the importance of rigorously vetting hardware and software supply chains, implementing robust access management protocols, and safeguarding the physical security of data centers supporting AI systems. Recommendations also touch upon the need to support downstream customers and processes by actively monitoring for anomalous activities and establishing clear reporting mechanisms for suspicious or harmful activities.

AI developers are encouraged to adopt a secure-by-design approach, evaluate potentially dangerous aspects of AI models, and ensure alignment with human-centric values. Furthermore, developers are urged to implement stringent privacy practices; conduct comprehensive evaluations to detect biases, failure modes, and vulnerabilities; and facilitate independent assessments for models presenting heightened risks to critical infrastructure systems.

Critical infrastructure owners and operators are advised to deploy AI systems securely, maintain strong cybersecurity practices addressing AI-related risks, protect customer data during AI product refinement, and provide transparent information regarding AI usage to deliver public benefits.

Civil society entities, including universities, research institutions, and consumer advocates, are encouraged to engage in standards development activities related to AI safety and security alongside government and industry stakeholders. The framework also highlights research on AI evaluations in critical infrastructure scenarios as an important area of focus.

Public sector entities, encompassing federal, state, local, tribal, and territorial governments, are urged to play a pivotal role in advancing standards of practice for AI safety and security through legislative and regulatory actions.

In response to the release of the framework, DHS Secretary Alejandro N. Mayorkas emphasized that widespread adoption would enhance the safety and security of essential services such as clean water delivery, consistent power, and internet access.

The DHS framework proposes a flexible model of shared and separate responsibilities to enable the safe and secure use of AI in critical infrastructure. It also leverages existing risk frameworks to help entities assess the potential risks associated with using AI in specific systems or applications that could lead to detrimental outcomes.

Mayorkas emphasized the dynamic nature of the framework, describing it as a “living document” intended to evolve in tandem with industry developments. This proactive approach underscores the commitment to ensuring the ongoing efficacy of AI safeguards within critical infrastructure applications.

