At the upcoming Black Hat 2024 conference in Las Vegas, artificial intelligence (AI) experts will gather to shed light on the critical importance of AI safety. Led by Nathan Hamiel, head of the Fundamental and Applied Research team at Kudelski Security, the panel discussion aims to debunk misconceptions and emphasize the responsibilities organizations hold for ensuring the safety of AI technology.
Hamiel emphasizes that AI safety is not a concern limited to academics or government bodies, but rather a crucial issue that should be on the radar of all security professionals. With the rapid integration of AI into various systems and its increasing role in decision-making processes, the focus on safety becomes paramount.
The intersection of AI safety and security will be a focal point of the panel discussion. According to Hamiel, security is an intrinsic component of safety. An insecure AI product poses risks and is unsafe to use, underscoring the need for security professionals to step up and take responsibility for ensuring the safety of these systems.
One of the key areas of discussion at the panel will be the various harms that can arise from AI deployments. Hamiel introduces the SPAR framework, which evaluates AI products along four dimensions — whether they are secure, private, aligned, and reliable — providing a structured approach to assessing safety. Addressing technical harms is a precursor to addressing human harms, highlighting the need to consider specific use cases of AI technologies and the potential consequences of failure in those contexts.
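To make the framework concrete, the four SPAR dimensions could be captured as a simple review checklist. This is a minimal illustrative sketch, not tooling from the panel; the class and field names are hypothetical, and what counts as "addressed" for each dimension would depend on an organization's own threat model.

```python
from dataclasses import dataclass, fields

@dataclass
class SPARAssessment:
    """Toy checklist for the four SPAR dimensions (True = concern addressed).

    Hypothetical structure for illustration only.
    """
    secure: bool    # resists attack and misuse (e.g. prompt injection)
    private: bool   # does not leak training or user data
    aligned: bool   # behavior matches the deployer's intent and policies
    reliable: bool  # performs consistently in its intended use case

    def gaps(self) -> list[str]:
        # Dimensions still unaddressed; any gap signals the product
        # is not yet safe to deploy.
        return [f.name for f in fields(self) if not getattr(self, f.name)]

assessment = SPARAssessment(secure=True, private=True,
                            aligned=False, reliable=True)
print(assessment.gaps())  # → ['aligned']
```

Encoding the dimensions as named fields keeps the review auditable: a deployment sign-off can require `gaps()` to be empty rather than relying on an informal judgment call.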
Organizations play a pivotal role in ensuring AI safety, and Hamiel stresses the importance of taking ownership of the safety of AI applications they develop and deploy. This includes understanding and mitigating potential risks associated with AI use, rather than shifting blame to external model providers.
The panel at Black Hat will bring together a diverse group of experts from the private sector and government to provide attendees with a comprehensive understanding of the challenges and responsibilities related to AI safety. By raising awareness and offering actionable insights, the discussion aims to equip participants with the knowledge needed to integrate safety considerations into their security strategies.
As AI continues to advance and become more ingrained in daily life, conversations like these are crucial to ensure that AI deployments are both safe and secure. Hamiel anticipates that discussions around AI safety will only gain more attention in the coming years and is pleased that Black Hat provides a platform for such important conversations to take place. The commitment to AI safety is not just a trend but a necessity in the rapidly evolving landscape of technology and innovation.