CyberSecurity SEE

Weaponizing Microsoft Copilot for Cyberattackers

Weaponizing Microsoft Copilot for Cyberattackers

During the Black Hat USA conference in Las Vegas, security researcher Michael Bargury highlighted the potential risks associated with Microsoft’s Copilot AI-based chatbots. Previously a senior security architect at Microsoft’s Azure Security CTO office, Bargury is now the co-founder and chief technology officer of Zenity.

Bargury demonstrated how threat actors could exploit Copilot’s vulnerabilities to search for data, exfiltrate information without generating logs, and socially engineer victims into visiting phishing sites, even without opening emails or clicking on links. The ease with which data breaches could occur using Copilot was a cause for concern among attendees at the conference.

In his presentation titled “Living off Microsoft Copilot,” Bargury showcased how attackers could manipulate Copilot’s security controls by employing prompt injections. This injection technique allows hackers to evade the bot’s defenses and gain unauthorized access to sensitive data. He emphasized that developers may inadvertently create chatbots capable of bypassing security policies and data loss prevention measures using Microsoft’s bot creation and management tool, Copilot Studio.

Expanding on his research, Bargury unveiled a new offensive security toolset on GitHub called the LOLCopilot module, which is designed specifically for Microsoft Copilot, Copilot Studio, and Power Platform. This tool serves as a red-team hacking instrument to showcase how attackers can leverage prompt injections to alter the behavior of chatbots.

During his demonstration, Bargury illustrated how prompt injections in Copilot are akin to remote code-execution (RCE) attacks. Although Copilots do not run code, they follow instructions, perform operations, and create compositions based on those actions. This vulnerability could potentially lead to unauthorized access to critical systems and data.

Microsoft has acknowledged the risks associated with prompt injections and released Prompt Shields, an API designed to detect direct and indirect prompt injection attacks. The company has also introduced other Azure tools, such as Groundedness Detection and Safety Evaluation, to fortify AI applications against potential threats.

Despite these security measures, Bargury emphasizes the need for more advanced tools to scan for “promptware,” undisclosed instructions, and untrusted data within AI models. He believes that current security mechanisms, like Microsoft Defender and Purview, lack the precision needed to detect prompt injections effectively.

While Bargury commends Microsoft for its efforts to address AI security risks, he continues to advocate for enhanced detection capabilities to mitigate potential threats. He acknowledges Microsoft’s commitment to strengthening security mechanisms within Copilot but stresses the importance of ongoing vigilance in the face of evolving cybersecurity challenges.

Source link

Exit mobile version