The Challenging Realities of Establishing AI Risk Policy


Artificial intelligence (AI) risk management has been a growing concern in the cybersecurity community for years, but Chief Information Security Officers (CISOs) are finally starting to take notice. This shift in attention was evident at the recent Black Hat USA conference in Las Vegas, where discussions about the multilayered risks surrounding AI took center stage.

Hyrum Anderson, a renowned AI security researcher and co-author of the book “Not with a Bug, But with a Sticker: Attacks on Machine Learning Systems and What To Do About Them,” believes that this increased awareness is a positive development. Just a year ago, he and his co-author Ram Shankar Siva Kumar were struggling to get CISOs to pay attention to AI risks. Many dismissed their concerns as science fiction and focused on other cybersecurity threats. However, Anderson and Siva Kumar now feel encouraged by the level of interest and excitement surrounding AI security at events like RSA Conference and Black Hat.

The keynotes and sessions at this year’s Black Hat conference tackled various aspects of AI risk. Jeff Moss, the founder of Black Hat and DEF CON, and Maria Markstedter of Azeria Labs explored the future challenges of AI risk, including technical, business, and policy issues. Researchers presented findings on emerging threats in AI systems, such as vulnerabilities in generative AI, AI-enhanced social engineering attacks, and the poisoning of AI training data to undermine machine learning models. These discoveries highlight the vulnerability of AI systems and the urgent need for risk management.

One particularly alarming study presented at the conference by Will Pearce, AI red team lead for Nvidia, revealed that it’s possible to manipulate training data for just $60, undermining the reliability of AI models. The ease with which this can be done raises concerns about the security of AI systems and the potential for malicious actors to exploit them.
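The article doesn't detail how the $60 manipulation worked, but the core idea behind training-data poisoning is simple: if an attacker can alter even a small slice of the data a model learns from, they can shift what the model learns. As a toy illustration (my own construction, not the study's method), the sketch below poisons a nearest-centroid classifier by flipping a fraction of training labels, dragging one class's learned centroid toward the other cluster:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: class 0 clustered near (0, 0), class 1 near (4, 4).
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(4.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def centroids(X, y):
    # The "model" is nearest-centroid: one mean vector per class.
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def predict(c0, c1, X):
    # Assign each point to whichever class centroid is closer.
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return (d1 < d0).astype(int)

c0_clean, c1_clean = centroids(X, y)
clean_acc = (predict(c0_clean, c1_clean, X) == y).mean()

# Poisoning step: the attacker relabels 40 class-1 points as class 0,
# dragging the learned class-0 centroid toward class 1's cluster.
y_poisoned = y.copy()
y_poisoned[100:140] = 0

c0_pois, c1_pois = centroids(X, y_poisoned)
poisoned_acc = (predict(c0_pois, c1_pois, X) == y).mean()

print("class-0 centroid, clean:   ", np.round(c0_clean, 2))
print("class-0 centroid, poisoned:", np.round(c0_pois, 2))
print(f"accuracy on true labels: clean {clean_acc:.2f}, poisoned {poisoned_acc:.2f}")
```

Real attacks on web-scale training pipelines are far subtler than label flipping, but the mechanism is the same: corrupted training data silently changes the decision boundary the model ends up with.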

Hyrum Anderson himself is at the forefront of AI risk management. At the Black Hat Arsenal, he unveiled the newly open-sourced AI Risk Database, a tool developed in collaboration with MITRE and Indiana University. This database aims to assist in discovering and quantifying vulnerabilities in AI systems, helping organizations better understand and mitigate their AI risks.

While technical challenges are a prominent focus at Black Hat, the question of AI risk policy also looms large. Siva Kumar, Anderson’s co-author, emphasizes the complexity of developing unified standards for AI risk management. Unlike consumer electronics, which can carry safety certifications from bodies such as Underwriters Laboratories (UL), AI systems are far too complex and varied to adhere to a single standard. Furthermore, implementing these policies is challenging because they are often vague and difficult for engineers to interpret.

The development of AI risk policies requires careful consideration of technical trade-offs. Siva Kumar highlights the tension between security measures and other desired properties of AI, such as robustness and bias. Increasing robustness may impact the bias of the system, and there is no one-size-fits-all solution. Organizations must navigate competing interests and prioritize the risks that matter most to them.

Enforcing AI risk policies requires a shift in organizational culture; it cannot be a mere compliance exercise. Enterprise culture needs to become more collaborative and decisive in making AI risk decisions. Without that cultural shift, any policies and regulations will amount to little more than paperwork.

Overall, the discussions at Black Hat USA highlighted the growing consciousness of AI risk among cybersecurity professionals. The conference delved into the technical challenges, policy considerations, and organizational culture changes needed to effectively manage AI risks. While there is still much work to be done, the fact that these conversations are happening is a positive step towards securing the future of AI.
