The United States government has made a significant move by barring federal agencies from using the services of Anthropic, a prominent domestic artificial intelligence company recognized for its AI model, Claude. The decision marks a historic moment: it is the first time a U.S. company has been designated a supply chain risk to national security, a classification previously reserved for foreign entities such as Huawei.
This unprecedented action was announced by President Donald Trump on February 28, 2026, via his platform, Truth Social. He directed all federal entities to “IMMEDIATELY CEASE” usage of Anthropic’s technology. This directive arrives at a critical juncture, given the growing reliance on AI technologies across various sectors.
Departments that heavily utilize Claude, including the Department of War (DoW), have been instructed to complete their phase-out of the AI within a six-month timeframe. In response to the announcement, Defense Secretary Pete Hegseth swiftly declared on X that Anthropic poses a national security risk. He underscored that no contractors or suppliers affiliated with the U.S. military are permitted to conduct any business with Anthropic, further complicating the company’s position in the government sector.
The impetus behind the ban stems from Anthropic’s refusal to grant the Pentagon full access to Claude. The company sought two key restrictions on the use of its AI: one prohibiting mass surveillance of U.S. citizens and the other disallowing the deployment of fully autonomous weapons. The Pentagon, however, insisted on unrestricted access for “all lawful purposes,” and the ensuing negotiations ultimately fell through. Following this breakdown in talks, Anthropic’s CEO, Dario Amodei, voiced concerns about the reliability of current AI models for fully autonomous weaponry and argued that mass surveillance contradicts fundamental civil rights for Americans.
Previously, Anthropic had successfully integrated its AI into classified networks for the U.S. government, following a substantial $200 million contract with the Department of War that commenced in June 2024. However, after repeated failed negotiations, the Pentagon issued an ultimatum to Anthropic: comply with their demands or be blacklisted from future government contracts. The final offer from the Pentagon reportedly contained legal language that could potentially override previous safeguard agreements, leading Anthropic to question both the legality and the ethics of the conditions it was being asked to meet.
In light of this escalating tension, Anthropic has indicated its intention to challenge its designation as a supply chain risk in court. The company argues that the legal statute invoked (10 USC 3252) is not applicable to general business operations but rather to specific Department of War contracts. This legal challenge aims to clarify the boundaries of this unprecedented classification and what it means for U.S. companies.
Although the government has restricted federal usage of Claude, individual users and contractors outside the Department of War are still permitted to use the AI. Nonetheless, the broader implications of the ban for the tech industry are significant. Anthropic’s operations depend heavily on cloud services provided by major companies like Amazon, Microsoft, and Google, all of which hold military contracts. Should these cloud providers sever ties with Anthropic in light of the governmental restrictions, the company’s operational capabilities could be severely impaired.
Legal experts have voiced concerns that this decision could set a troubling precedent by applying a foreign national security framework to a domestic business, raising questions about the government’s power to label companies in this manner. President Trump has further cautioned Anthropic about potential civil and criminal repercussions should it fail to cooperate during the designated transition period.
As the situation unfolds, Anthropic remains steadfast in refusing to permit its AI to be used either in autonomous military capacities or for domestic surveillance, standing firmly behind its ethical stance. The ongoing conflict not only highlights the complexities of AI governance but also illustrates the delicate balance between national security and corporate autonomy in the rapidly evolving field of artificial intelligence.
