In a recent statement, a senior FBI official warned about the malicious use of artificial intelligence (AI) for various criminal activities. The official highlighted the growing threat posed by AI in obtaining explosives, conducting sextortion schemes, and spreading malware via deceptive websites.
According to the FBI, AI has significantly lowered technical barriers, allowing individuals with limited expertise to write malicious code and engage in low-level cyber activities. At the same time, more sophisticated actors can leverage AI to develop novel attacks, craft more convincing delivery mechanisms, and run more effective social engineering campaigns.
One concerning development highlighted by the FBI is the proliferation of fake websites generated using AI technology. These websites, laced with malware, feature engaging content and multimedia designed to trick unsuspecting users. Some of these fake pages have amassed millions of followers, generating significant user engagement.
To combat this issue, the FBI is working with its partners to authenticate multimedia content and reliably determine what is synthetically generated. Additionally, the bureau is notifying hosting providers about any illegal activity that may be taking place on their platforms.
The democratization of AI has also allowed criminal actors to develop their own AI models at little to no cost, without the safeguards implemented by larger companies and corporations. Open-source AI tools are readily available online for use in traditional criminal schemes such as defrauding the elderly, issuing ransom demands, and bypassing bank security measures.
Hackers with technical skills have also modified or developed open-source AI models to suit their specific criminal needs. Moreover, threat actors have explored AI models available on the dark web, which provide capabilities beyond those offered by large legitimate companies.
In addition to cyber-related crimes, AI has been exploited in the realm of sextortion. Criminals have used AI technology to create deepfake sexually explicit content, which is circulated on social media forums or pornographic websites. This content can be used to harass and extort victims, especially children.
Furthermore, the official warned of the misuse of AI in the production of dangerous chemical, biological, and explosive materials. Terrorists have turned to AI models to simplify the creation of explosives and increase their potency. Some criminals have successfully elicited bomb-making instructions from such models, prompting the FBI to collaborate with AI firms to prevent the release of this sensitive information.
Regarding nation-state actors, China has been particularly aggressive in its efforts to steal American AI technology and data. The country aims to enhance its own AI programs and gain a competitive edge in the global AI landscape. China’s targets for intellectual property theft include U.S. companies, universities, and government research facilities. The stolen IP often includes AI algorithms, data expertise, and computing infrastructure.
The FBI official stressed that U.S. talent plays a crucial role in the AI supply chain, making it a prime target for adversaries. The United States is internationally recognized for the quality of its AI research and development. To transfer cutting-edge AI research and development into their own military and civilian programs, nation-states like China employ diverse means, including nontraditional collectors and legal inbound foreign investment.
Overall, the FBI’s warning sheds light on the growing challenge of AI misuse by both criminals and nation-state actors. As AI continues to advance, it is crucial for governments, law enforcement agencies, and technology companies to collaborate in implementing robust security measures to protect against these emerging threats.

