
RSAC: Researchers Share Lessons from the World’s First AISIRT

Researchers at Carnegie Mellon University have taken a leading role in addressing the security risks that accompany the growing use of artificial intelligence (AI) in critical sectors such as national security and infrastructure. With the establishment of the AI Security Incident Response Team (AISIRT), the university aims to define incident response strategies for AI and machine learning (ML) systems and to coordinate community response efforts.

The need for AISIRT became apparent as data showed an increase in AI-powered attacks on AI systems. Lauren McIlvenny and Gregory Touhill, the minds behind AISIRT, highlighted the risks associated with AI technologies, such as attacks on generative AI tools and vulnerabilities in GPU kernels. Collaboration between Carnegie Mellon University and the CERT Division's partner network enabled the operational launch of AISIRT in August 2023, with full operational status achieved by October of that year.

AISIRT functions much like a traditional Computer Security Incident Response Team (CSIRT) and comprises four key components: AI incident response, vulnerability discovery tools, a vulnerability management framework, and situational awareness services. The team brings together stakeholders such as system administrators, network engineers, AI/ML practitioners, and researchers from trusted industry and academic partners.

Looking ahead, McIlvenny and Touhill envision AISIRT as a central hub for sharing best practices, standards, and guidelines related to AI in defense and national security. The duo plans to establish a cross-disciplinary AI community of practice involving academia, industry, government organizations, and legislative bodies to enhance AI security frameworks.

After six months of operation, McIlvenny and Touhill reflected on the lessons learned from running AISIRT. They emphasized the interconnectedness of AI and cybersecurity vulnerabilities, the need for tailored training for AI developers, and the importance of ongoing evolution in cybersecurity processes to support AI systems effectively.

While acknowledging the infancy of AI security, the researchers highlighted lingering questions surrounding emerging regulatory regimes, privacy impacts, threats to intellectual property, and governance and oversight in AI deployment. As organizations continue to grapple with securing AI systems, Touhill encouraged stakeholders to share their experiences and insights to collectively address the challenges of AI security.

The establishment of AISIRT marks a significant step toward building AI security incident response capabilities in sensitive sectors. Through collaboration, research, and shared best practices, the Carnegie Mellon University team aims to stay ahead of emerging threats and to safeguard critical infrastructure against AI-related risks.
