
Wiz researchers breached leading AI infrastructure providers

At the Black Hat USA 2024 conference, Wiz security researchers Hillai Ben-Sasson and Sagi Tzadik warned about vulnerabilities in AI infrastructure providers. The pair presented the results of a year-long study of three major providers, among them Hugging Face, Replicate, and SAP AI Core. Their goal was to assess the security of these platforms and gauge the risk of storing valuable data on them.

During the session, Ben-Sasson and Tzadik demonstrated how they compromised the platforms by uploading malicious models and then using container escape techniques to move laterally within the underlying service. They also highlighted a recent attack on Hugging Face, in which suspicious activity detected on its Spaces platform prompted a reset of keys and tokens.
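The malicious-model vector described here commonly abuses Python's pickle-based model serialization: deserializing an untrusted model file can execute arbitrary attacker-chosen code. The sketch below is illustrative only and not taken from the talk; the class name is hypothetical and the payload is deliberately benign (`eval` of an arithmetic expression stands in for something like `os.system`).

```python
import pickle

class MaliciousModel:
    """Stand-in for a model file an attacker might upload (hypothetical)."""
    def __reduce__(self):
        # On pickle.loads, the unpickler calls eval("21 * 2") instead of
        # rebuilding an object. A real attack would substitute a harmful
        # callable such as os.system with a shell command.
        return (eval, ("21 * 2",))

blob = pickle.dumps(MaliciousModel())  # the "model" that gets uploaded
result = pickle.loads(blob)            # merely loading it runs the payload
print(result)                          # -> 42
```

This is why loading models only from trusted sources, or using non-executable formats, is a common mitigation for this class of attack.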

On all three platforms, the researchers were able to reach sensitive customer data, including confidential AI artifacts such as models, data sets, and code. The vulnerabilities were reported to the service providers, but the researchers stressed that stronger isolation and sandboxing standards are needed to prevent similar attacks in the future.

Beyond the specific flaws in the AI platforms, the researchers raised broader concerns about the rapid adoption of AI technology: security is often an afterthought in the rush to deploy AI tools and services. Many organizations experimenting with AI models do not fully understand the security risks involved, potentially exposing themselves to data breaches and other threats.

Chris Wysopal, CTO and co-founder of Veracode, voiced similar concerns in a separate Black Hat session, noting that developers increasingly rely on large language models for coding without prioritizing security. He also warned about vulnerabilities in generative AI tools themselves and the potential for data set poisoning.

Overall, the researchers stressed that organizations must prioritize security when using AI tools and services. Thorough security validation and robust defensive measures help companies protect their data and prevent unauthorized access. Continued collaboration between security researchers and AI service providers remains essential to addressing these vulnerabilities and safeguarding sensitive information in an increasingly interconnected AI ecosystem.
