Microsoft took legal action against a global cybercrime network accused of creating illicit AI deepfakes of celebrities. The company filed a lawsuit in December targeting a group tracked as Storm-2139, which it accused of bypassing the safety guardrails of generative AI services, including Microsoft's, to produce offensive and harmful content.
Following the filing, Microsoft obtained a temporary restraining order and preliminary injunction, allowing it to seize a website that was central to the group's operations. The seizure effectively disrupted the group and sowed chaos among its members. Steven Masada, assistant general counsel for Microsoft's Digital Crimes Unit (DCU), said the seizure of the website and the legal filings caused panic among the group's members, resulting in internal discord and blame-shifting.
In an amended complaint, Microsoft named four individuals associated with the network: Arian Yadegarnia of Iran, Alan Krysiak of the United Kingdom, Ricky Yuen of Hong Kong, and Phát Phùng Tấn of Vietnam. The company alleged that the group bypassed Azure OpenAI Service guardrails by using stolen Azure OpenAI API keys in conjunction with its software, de3u, to generate images with DALL-E models.
The defendants' de3u application was designed to communicate with Azure computers through undocumented Microsoft network APIs, mimicking legitimate Azure OpenAI Service API requests. By using stolen API keys and other authenticating information, the software let users bypass technological controls meant to prevent the alteration of certain Azure OpenAI Service API request parameters.
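From the defender's side, two of the controls implicated here are server-side parameter enforcement and monitoring of API-key usage. The sketch below illustrates both in generic Python; it is not Microsoft's implementation, and every name in it (LOCKED_PARAMS, validate_request, KeyUsageMonitor, and the parameter names) is a hypothetical illustration rather than an Azure OpenAI Service internal.

```python
# Illustrative sketch: enforce locked request parameters server-side and flag
# anomalous API-key usage. Hypothetical names; not Azure OpenAI internals.

from collections import defaultdict
from datetime import datetime, timedelta

# Parameters the service pins server-side; client-supplied values are rejected.
LOCKED_PARAMS = {"content_filter_level", "safety_system_prompt"}

def validate_request(params: dict) -> dict:
    """Reject any request that tries to override server-enforced parameters."""
    overridden = LOCKED_PARAMS & params.keys()
    if overridden:
        raise PermissionError(f"Locked parameters cannot be set by clients: {overridden}")
    return params

class KeyUsageMonitor:
    """Flag API keys whose request volume spikes far above a threshold,
    one common signal that a key has been stolen and is being abused."""

    def __init__(self, window: timedelta = timedelta(minutes=5), threshold: int = 100):
        self.window = window
        self.threshold = threshold
        self.requests = defaultdict(list)  # api_key -> recent request timestamps

    def record(self, api_key: str) -> bool:
        """Record one request; return True if the key looks anomalous."""
        now = datetime.utcnow()
        recent = [t for t in self.requests[api_key] if now - t < self.window]
        recent.append(now)
        self.requests[api_key] = recent
        return len(recent) > self.threshold
```

In a real deployment these checks would run in the API gateway before a request ever reaches the model, which is what makes a client-side tool like de3u, which can only shape the requests it sends, ineffective against properly enforced server-side controls.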
Microsoft asked the U.S. District Court for the Eastern District of Virginia to find the defendants' conduct willful and malicious, to order the website's infrastructure secured and isolated, and to award damages in an amount to be determined at trial.
As part of its ongoing efforts to combat the misuse of AI technology, Microsoft emphasized the importance of building robust guardrails and safety systems aligned with its responsible AI principles, including content filtering and operational monitoring designed to prevent similar incidents in the future.
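For readers unfamiliar with how such filtering surfaces in practice, here is a minimal sketch of the client-visible side, using the official openai Python SDK against an Azure OpenAI deployment. Azure OpenAI rejects prompts that trip its content filter with an HTTP 400 error whose code is "content_filter"; the endpoint, deployment name, and API version below are placeholders, and the logging behavior is an assumption about how a team might feed these events into operational monitoring.

```python
# Minimal sketch of handling an Azure OpenAI content-filter rejection.
# Placeholder endpoint, deployment name, and API version.

import os
from openai import AzureOpenAI, BadRequestError

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # placeholder API version
)

try:
    result = client.images.generate(
        model="dall-e-3",  # Azure deployment name; placeholder
        prompt="a watercolor painting of a lighthouse at dawn",
    )
    print(result.data[0].url)
except BadRequestError as exc:
    # Prompts blocked by the content filter come back as HTTP 400 with the
    # error code "content_filter"; logging these events is one way operational
    # monitoring can spot repeated attempts to generate harmful content.
    if getattr(exc, "code", None) == "content_filter":
        print("Request blocked by the service's content filter.")
    else:
        raise
```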
A Microsoft spokesperson highlighted the company's commitment to strengthening its security measures and shared resources on how Microsoft detects and mitigates evolving threats against AI guardrails. By prioritizing responsible AI practices, the company aims to guard against misuse of AI technology and protect users from harmful content generated by cybercriminals.
The legal action underscores Microsoft's commitment to combating cybercrime and protecting individuals from the harmful effects of illicit AI deepfakes. By targeting the people behind these operations, Microsoft is signaling that such behavior will not be tolerated and that it is willing to take decisive action to disrupt criminal networks engaged in malicious activity.

