CyberSecurity SEE

Pentagon Memo Criticizes Anthropic for PR Campaign
Artificial Intelligence & Machine Learning,
Next-Generation Technologies & Secure Development,
Standards, Regulations & Compliance

DOD Official: AI Firm Wanted ‘Approval Role in the Operational Decision Chain’


Recently surfaced internal memos from the Department of Defense (DoD) explain the rationale behind the decision to blacklist the artificial intelligence firm Anthropic. The documents reveal deep concern about the reliability and controllability of the firm's models in military applications, and shed light on the broader challenges of technology governance within the military.

The documents were made public following their submission to a San Francisco federal district court, providing a more comprehensive understanding of why the Pentagon considers Anthropic a “supply-chain risk.” Central to the controversy is Anthropic’s refusal to endorse certain government applications of its technology, alongside its ongoing public disputes with the DoD over these matters.

Defense officials have expressed significant apprehension regarding Anthropic, particularly highlighting that the company is currently the sole developer authorized to operate within specific sensitive military networks. The officials assert that Anthropic retains complete authority to modify, restrict, or override the functionalities of its models once they are integrated into Pentagon systems.

In a memo penned by Emil Michael, the undersecretary of defense for research and engineering, it was noted that “Anthropic’s ability to unilaterally alter system guardrails and model weights without [Department of War] consent could fundamentally change the system’s function and creates a significant operational risk.” Michael further accused the company of leveraging ongoing negotiations with the DoD primarily for its own favorable public relations narrative.

“A vendor that raises the prospect of disallowing its software to function in critical military operations, and treats its negotiations with the DoD primarily as tools for brand-building cannot be trusted,” Michael asserted, adding that the company’s confrontational public stance toward the DoD made the concern especially acute.

Michael also acknowledged that the military had accepted a certain level of risk by integrating an AI system into its network, given Anthropic’s role as the software’s maintainer. That risk became untenable, he said, when “Anthropic asserted in negotiations that it would have an approval role in the operational decision chain.” Such a position, combined with Anthropic’s adversarial public relations strategy, poses a “fully mature supply-chain risk,” he wrote, encompassing an increased likelihood of model poisoning, insider threats, data exfiltration, and denial of service, all of which endanger military capabilities.

In a detailed evaluation conducted by Exiger Diligence, a third-party firm commissioned by the Pentagon, Anthropic’s overall risk was assessed as “medium” across various categories, including cyber threats, operational risks, and compliance issues.

The legal dispute between Anthropic and the DoD has progressed through multiple courts. A federal appeals court in Washington recently upheld the Pentagon’s supply-chain risk designation, allowing the Defense Department to enforce the blacklist. A separate federal judge in California, however, took a more restrained approach, granting Anthropic partial relief that limits the breadth of the government’s authority in applying such designations.

In its court filings, Anthropic contends that the supply-chain risk designation is flawed both factually and procedurally, arguing that it jeopardizes numerous lucrative federal contracts and threatens far greater losses if the designation is extended to other government and commercial partners.

The unfolding saga highlights the delicate and often contentious intersection of technology, military operations, and corporate governance, raising critical questions about trust, accountability, and the evolving role of AI in national defense.
