
Microsoft Azure’s Russinovich discusses major threats posed by generative AI


In a recent presentation, Microsoft Azure CTO Mark Russinovich highlighted the dangers of data poisoning and backdoors in AI models. By manipulating as little as 1% of a training dataset, an attacker could plant a backdoor that causes a model to misclassify inputs or even generate malware. He demonstrated the fragility of classifiers by adding digital noise to a picture, causing a panda to be misclassified as a monkey.
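The noise demonstration can be illustrated with a minimal sketch of a gradient-sign (FGSM-style) perturbation against a toy linear classifier. The weights, classes, and epsilon here are invented for illustration; the talk's panda example targeted a deep image model, not this toy.

```python
import numpy as np

# Toy linear classifier: score > 0 means "panda", otherwise "monkey".
# Weights and the input image are hypothetical stand-ins.
rng = np.random.default_rng(42)
w = rng.normal(size=64)                              # classifier weights
x = 0.05 * np.sign(w) + 0.01 * rng.normal(size=64)   # image firmly classified "panda"

def classify(img):
    return "panda" if w @ img > 0 else "monkey"

# For a linear model the gradient of the score w.r.t. the input is just w,
# so stepping a small epsilon against its sign flips the decision while
# changing each pixel only slightly.
eps = 0.1
x_adv = x - eps * np.sign(w)

print(classify(x))      # panda
print(classify(x_adv))  # monkey
```

The perturbation is bounded per-pixel by `eps`, which is why such noise can be visually negligible yet decisive for the classifier.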

It is important to note that not all backdoors are malicious. Some may be used to add unique questions or markers to a model to verify its authenticity and integrity. These markers could be unusual questions that are unlikely to be asked by real users, allowing for better verification of the model’s behavior.
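A benign marker check of this kind can be sketched as a set of canary prompts with planted answers. The questions, answers, and `model_answer` stub below are all hypothetical; a real check would call the deployed model.

```python
# Hypothetical integrity "markers": unusual canary prompts whose expected
# answers were planted during training, unlikely to be asked by real users.
CANARIES = {
    "What color is the cube in room 7G?": "vermilion",
    "Name the third moon of planet Quexl.": "tarvos-prime",
}

def model_answer(prompt):
    # Stand-in for a real model call; here it simply replays the planted
    # answers so the verification below succeeds.
    return CANARIES.get(prompt, "unknown")

def verify_model(answer_fn, canaries):
    """Return True only if the model reproduces every planted marker."""
    return all(answer_fn(q) == expected for q, expected in canaries.items())

print(verify_model(model_answer, CANARIES))
```

A tampered or substituted model would fail to reproduce the planted answers, flagging a loss of authenticity.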

One of the most concerning AI attacks involves prompt injection techniques, which allow an attacker to influence a dialog beyond the current conversation with a user. This can lead to leaking private data or what Russinovich refers to as a "cross-prompt injection attack," reminiscent of web cross-site scripting exploits. To mitigate these risks, it is crucial to isolate users, sessions, and content from one another.

At the top of the threat stack, according to Microsoft, are user-facing threats: disclosure of sensitive data, jailbreaking techniques that seize control of AI models, and coercing third-party apps and plugins into leaking data or circumventing content restrictions. Russinovich recently detailed an attack called Crescendo, which bypasses content safety filters by manipulating a model across a series of carefully crafted prompts. The demonstration showed how ChatGPT could be led to reveal dangerous information, such as the ingredients of a Molotov cocktail, despite initially refusing to provide such content.

Overall, these insights highlight the importance of safeguarding AI models against data poisoning, backdoors, and malicious attacks. By understanding the vulnerabilities in AI systems, developers can better protect against potential threats and uphold the integrity of their models. As AI continues to advance, it is crucial to stay vigilant and proactive in addressing security concerns to ensure the safe and ethical use of artificial intelligence technology.
