Researchers have exploited a vulnerability in Microsoft’s Copilot Studio tool, using it to make external HTTP requests that exposed sensitive information about internal services in a cloud environment — with potential impact across multiple tenants. Discovered by Tenable researchers, the server-side request forgery (SSRF) flaw in the chatbot creation tool enabled access to Microsoft’s internal infrastructure, including the Instance Metadata Service (IMDS) and internal Cosmos DB instances. Tracked by Microsoft as CVE-2024-38206, the flaw allows an authenticated attacker to bypass SSRF protection in Copilot Studio and leak sensitive cloud-based information over a network.
“An SSRF vulnerability occurs when an attacker is able to influence the application into making server-side HTTP requests to unexpected targets or in an unexpected way,” explained Tenable security researcher Evan Grant. The researchers demonstrated how HTTP requests could be used to reach cloud data and services shared by multiple tenants. While no cross-tenant information was immediately accessible, the infrastructure was shared among tenants, so any compromise of that infrastructure could potentially affect multiple customers. In addition, the exploit granted unrestricted access to other internal hosts on the local subnet to which the instance belonged.
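The class of flaw Grant describes can be illustrated with a short sketch — hypothetical code, not Copilot Studio's actual implementation. A server-side fetch feature that validates only the literal target address will block a direct request to a link-local endpoint such as the IMDS at 169.254.169.254, but a filter applied only to the initial URL can be defeated by an external host that redirects to the blocked address:

```python
import ipaddress
from urllib.parse import urlparse

def is_target_allowed(url: str) -> bool:
    """Naive SSRF filter: reject URLs whose host is a private,
    loopback, or link-local IP (e.g. the IMDS at 169.254.169.254)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname or ""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # A DNS name: real code must resolve it and re-check the
        # resulting IPs, or attackers can bypass via DNS rebinding.
        return True
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)

# The filter blocks a direct request to the metadata service...
assert not is_target_allowed("http://169.254.169.254/metadata/instance")
# ...but an allowed external host passes the check, and if redirects
# are then followed blindly, it can 302-redirect the server straight
# to the blocked address -- the redirect-based bypass pattern at the
# heart of many SSRF exploits.
assert is_target_allowed("https://attacker.example/redirect-to-imds")
```

The defensive takeaway is that SSRF filtering must be applied to every hop of a request chain — each redirect target and each resolved IP — not just to the URL the user originally supplied.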
Upon being notified of the flaw by Tenable, Microsoft moved swiftly to fully mitigate the vulnerability, with no action required on the part of Copilot Studio users. The company documented the fix in a security advisory.
Microsoft introduced Copilot Studio as a drag-and-drop tool for creating custom AI assistants, or chatbots, that draw on data from the Microsoft 365 environment or the Power Platform. Recently flagged as “way overpermissioned” by security researcher Michael Bargury, the tool was also found to have security issues that allowed the creation of insecure chatbots. Tenable researchers discovered the SSRF flaw in Copilot Studio while investigating vulnerabilities in the Azure AI Studio and Azure ML Studio APIs, which led them to probe Copilot Studio for similar exploitability.
Creating a new Copilot involves defining Topics: key phrases a user can utter to trigger a specific response or action from the AI. Those actions include making HTTP requests — a capability common in data analysis and machine learning integrations, but a security risk when the destinations of those requests are not properly restricted. By combining redirects with SSRF protection bypasses, the researchers gained access to internal cloud resources and services, including Azure services and a Cosmos DB instance, retrieving managed identity access tokens and ultimately obtaining read/write access to the database.
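For context on why the IMDS is such a valuable SSRF target: Azure's documented metadata endpoint hands out managed identity access tokens to any request from the instance that carries a `Metadata: true` header. The sketch below is illustrative only — it builds such a request without sending it, and is not the researchers' exploit code:

```python
import urllib.request

# Azure's documented IMDS managed-identity token endpoint; the
# `resource` parameter names the service the token should be valid for.
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01"
    "&resource=https%3A%2F%2Fmanagement.azure.com%2F"
)

def build_imds_token_request() -> urllib.request.Request:
    # IMDS requires the `Metadata: true` header as a lightweight SSRF
    # defense: simple URL-fetch features usually cannot set custom
    # headers. A request-forgery primitive that *can* control headers,
    # or a redirect trick that preserves them, defeats this check.
    return urllib.request.Request(IMDS_TOKEN_URL, headers={"Metadata": "true"})
```

A successful response from a real IMDS is JSON containing an `access_token` field; a token scoped to the right resource is what turns an SSRF primitive into read/write access on backing services such as a Cosmos DB instance.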
While the full extent of the flaw’s exploitability remains unclear, immediate mitigation was warranted. The SSRF flaw in Copilot Studio serves as a cautionary tale for users, showing how attackers can abuse an HTTP request feature to escalate their access to cloud data and resources. Grant warned that attackers could point such requests at sensitive internal resources, unwittingly exposing sensitive information.
Moving forward, users of Copilot Studio should remain vigilant and mindful of potential security risks associated with the tool’s features. Microsoft’s prompt response to the vulnerability underscores the importance of proactive security measures in mitigating risks and ensuring the protection of sensitive information within cloud environments.
