RSAC: Researchers Share Lessons from the World’s First AISIRT


Researchers at Carnegie Mellon University have taken a leading role in addressing the growing use of artificial intelligence (AI) in critical sectors such as national security and infrastructure. With the establishment of the AI Security Incident Response Team (AISIRT), the university aims to define incident response strategies for AI and machine learning systems and coordinate community response efforts.

The need for AISIRT became apparent as data showed an increase in AI-powered attacks on AI systems. Lauren McIlvenny and Gregory Touhill, who lead the initiative, highlighted the risks associated with AI technologies, including attacks on generative AI tools and vulnerabilities in GPU kernels. Collaboration between Carnegie Mellon University and the CERT Division's partner network enabled the operational launch of AISIRT in August 2023, with full operational status achieved by October of that year.

AISIRT functions similarly to a traditional Computer Security Incident Response Team (CSIRT) and comprises four key components: AI incident response, vulnerability discovery tools, vulnerability management framework, and situational awareness services. The team includes various stakeholders like system administrators, network engineers, AI/ML practitioners, and researchers from trusted industry and academic partners.

Looking ahead, McIlvenny and Touhill envision AISIRT as a central hub for sharing best practices, standards, and guidelines related to AI in defense and national security. The duo plans to establish a cross-disciplinary AI community of practice involving academia, industry, government organizations, and legislative bodies to enhance AI security frameworks.

After six months of operation, McIlvenny and Touhill reflected on the lessons learned from running AISIRT. They emphasized the interconnectedness of AI and cybersecurity vulnerabilities, the need for training tailored to AI developers, and the importance of continually evolving cybersecurity processes to support AI systems effectively.

While acknowledging the infancy of AI security, the researchers highlighted lingering questions surrounding emerging regulatory regimes, privacy impacts, threats to intellectual property, and governance and oversight in AI deployment. As organizations continue to grapple with securing AI systems, Touhill encouraged stakeholders to share their experiences and insights to collectively address the challenges of AI security.

In conclusion, the establishment of AISIRT marks a significant step towards enhancing AI security response capabilities in sensitive sectors. Through collaboration, research, and shared best practices, the team at Carnegie Mellon University aims to stay ahead of emerging threats and safeguard critical infrastructure against AI-related risks.
