
Federal Employees Continue Using Claude Despite Trump Orders


Artificial Intelligence & Machine Learning,
Government,
Industry Specific

Agencies Prioritizing Tracking Use Over Enforcing Immediate Cutoffs

Image: Shutterstock

Tension between federal policy and on-the-ground technology adoption has come into sharper focus in recent weeks as federal staffers continue to use Anthropic’s artificial intelligence models. Despite explicit directives from President Donald Trump in late February ordering a halt to all use of the technology, many employees at federal agencies still rely on the models. The standoff comes amid an escalating dispute between the Department of Defense (DoD) and Anthropic over the restrictions the company places on military uses of its products.

Trump announced the move in a post on his social media platform: “I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology.” He gave agencies a six-month window to phase out the products. The language underscored the administration’s intent to cut ties with the company, citing national security concerns around AI deployments in sensitive roles.

Current and former federal employees told industry sources, however, that the directive did not trigger any immediate or coordinated halt. Internal communications in the weeks after the announcement focused more on determining the extent of AI use than on enforcing a sudden cutoff, suggesting that operational reality within federal civilian agencies lags the administration’s assertive stance.

Staff at the State and Treasury departments said they continue to use Anthropic’s popular Claude model, even as the agencies work to integrate a version of OpenAI’s ChatGPT into their systems. Agencies also aim to test Anthropic’s Mythos system, designed to autonomously identify and fix software vulnerabilities. Politico reported that the Department of Commerce’s Center for AI Standards and Innovation is evaluating Mythos, further illustrating the tangled landscape of AI integration within government operations.

The contrast between civilian agencies’ continued use of Claude and the administration’s staunch position against Anthropic raises questions about the government’s approach to risk management. The Pentagon has formally designated Anthropic a supply chain risk, arguing that the company’s continued control over its models after deployment could compromise the integrity and reliability of systems crucial to national security.

In a notable turn, a federal appeals court in Washington recently allowed the Pentagon to proceed with plans to remove Anthropic’s technology from military systems. Even with that judicial backing, parts of the effort remain tied up in litigation, effectively shutting Anthropic out of Pentagon projects for the foreseeable future.

Despite the administration’s definitive stance toward Anthropic, the picture within civilian agencies is murkier. Officials are still working to determine how widely Anthropic tools are actually used across government departments, and are weighing alternative products should a policy shift force their removal.

According to staffers, internal communication since Trump’s directive has centered on mapping how deeply Claude is embedded in different offices and the roles it plays in daily operations. The absence of any formal phase-out timeline suggests a reluctance to dismantle established workflows that already rely on AI for critical functions such as drafting, coding and data analysis.

The persistence of Anthropic’s tools at agencies such as State and Treasury points to a deeper entrenchment of AI in everyday government work than official policy implies. Neither department has commented on its continued use of Claude, and requests for clarification from the White House have gone unanswered.

