Federal Judge Blocks Trump’s Designation of Anthropic as Security Risk
A federal judge in California has blocked the Trump administration from labeling Anthropic a supply chain risk to national security, a designation that would have halted the AI company’s collaboration with federal agencies.
Litigation Initiated Following Supply Chain Risk Designation
This month, Anthropic filed a lawsuit against the Pentagon and other federal agencies after the Department of Defense identified the company as posing a “supply chain risk to national security.” President Donald Trump subsequently announced intentions to prohibit the use of Anthropic products across various federal entities.
Judge Critiques Lack of Justification for Risk Designation
U.S. District Judge Rita Lin of California characterized the designation of Anthropic as a “supply chain risk” as potentially arbitrary and without legal substantiation. In her ruling, she noted, “The Department of the Army provides no legitimate basis for inferring that Anthropic’s outspoken claims about restricted use make it a potential saboteur.”
Administration Given Time to Appeal Decision
Judge Lin’s order includes a one-week hold, allowing the Trump administration the opportunity to file an appeal against the ruling.
Anthropic Expresses Satisfaction with Judicial Outcome
In response to the ruling, an Anthropic spokesperson welcomed the court’s swift action, adding that the decision reinforces the company’s likelihood of success on the merits of the case. The spokesperson emphasized that, while the lawsuit was necessary for the protection of Anthropic, its clients, and partners, the company remains committed to productive collaboration with the government to advance safe and reliable AI solutions for all Americans.
Supply Chain Risk Designation Significant for Defense Operations
If the supply chain risk designation were enforced, it would compel the Department of Defense and its contractors to cease utilizing Anthropic’s commercial AI services in defense operations. In late February, Defense Secretary Pete Hegseth stated on X that he was directing the classification of the company as a “supply chain risk.” Moreover, Trump indicated that he would instruct all federal agencies, including the Treasury and State Departments, to discontinue the use of Anthropic’s AI technology.
Judge’s Ruling Restores Operational Status with Anthropic
Judge Lin’s order also bars other federal agencies from ceasing their work with Anthropic, effectively restoring the prior working relationship. The ruling clarifies that the Department of the Army is not required to use Anthropic’s products and remains free to engage other AI providers, in accordance with existing laws and regulations.
Anthropic’s Legal Actions Against the Pentagon
Anthropic has initiated two legal actions against the Department of Defense: one in the U.S. District Court for the Northern District of California and another in the U.S. Court of Appeals for the District of Columbia Circuit. The company argues that the government’s actions constitute more than a standard contractual dispute, describing them as an “unlawful retaliatory campaign” following tense negotiations regarding the military’s use of its AI systems.
Concerns Over Military Uses of AI Technology
The company has sought stronger assurances that its AI technologies will not be employed for autonomous weaponry or extensive domestic surveillance initiatives. Notably, Anthropic is the creator of the Claude chatbot and is the sole AI company permitted to integrate its services within the Department of Defense’s classified networks. By contrast, shortly after Hegseth’s announcement last month, OpenAI CEO Sam Altman revealed that his organization had secured an agreement for its services to be used in a classified setting.
Judicial Ruling Highlights Due Process Concerns
Judge Lin underscored the procedural deficiencies surrounding the government’s actions, stating, “Although Anthropic had notice that the government objected to the terms of its contract, it had no notice or opportunity to object until defendants publicly banned Anthropic from all federal government business.” The ruling highlighted a lack of opportunity for Anthropic to challenge the basis for the supply chain risk designation, further emphasizing the need for due process.
