Appeals Court Keeps Anthropic Supply‑Chain Risk Label in Place

Key Points
- The D.C. Circuit Court of Appeals upheld the Pentagon’s supply‑chain risk label on Anthropic.
- The decision conflicts with a San Francisco district court order that removed the label last month.
- Judges argued that lifting the label could hinder military operations during an ongoing conflict.
- Anthropic claims the designation harms its business and that its Claude model is not fit for fully autonomous weapons.
- Acting Attorney General Todd Blanche hailed the ruling as a win for military readiness.
- Legal experts see the case as a test of executive authority over domestic tech firms.
- Oral arguments before the D.C. Circuit are set for May 19.
- Details on the DoD’s use of Claude and its shift to other AI vendors remain limited.

A three‑judge panel of the U.S. Court of Appeals for the D.C. Circuit ruled 2‑1 on Wednesday that Anthropic, the company behind the Claude AI model, must remain designated as a supply‑chain risk for the Pentagon while litigation continues. The decision conflicts with a San Francisco district court order last month that lifted the label, leaving the military’s access to Anthropic’s tools in limbo as both cases proceed toward final judgments. The majority reasoned that removing the label would jeopardize military operations during an ongoing conflict, even though the company may suffer financial harm in the meantime.
Anthropic’s legal fight stems from two separate statutes the Department of Defense used to bar the firm from supplying AI tools to the armed forces. A San Francisco federal judge last month found the DoD acted in bad faith, citing the company’s pushback against restrictive usage policies and its public criticism of those limits. That judge ordered the risk label removed, prompting the Trump administration to restore access to Claude across the Pentagon and other federal agencies.
In Washington, the appellate panel focused on a different statutory provision but reached the opposite conclusion from the San Francisco court. The judges emphasized the unique pressures of wartime procurement, writing that “granting a stay would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict.” The panel acknowledged Anthropic’s potential loss of revenue but deferred to the Department of Defense’s judgment on national‑security matters.
Acting Attorney General Todd Blanche praised the decision on X, calling it “a resounding victory for military readiness.” He reiterated that the commander‑in‑chief and the Department of War (as the Pentagon calls itself under the current administration) must retain full access to AI models integrated into sensitive systems.
Anthropic spokesperson Danielle Cohen said the company was grateful the D.C. Circuit recognized the urgency of the issue and remained confident the courts will ultimately find the designations unlawful. The firm argues that the label has cost it contracts and that its Claude model lacks the reliability required for fully autonomous weapon systems, a stance that has drawn criticism from the Pentagon.
Legal experts say the case tests the breadth of executive power over private tech firms, especially as the Pentagon accelerates AI deployment in its conflict with Iran. Some scholars warn that the DoD’s actions could stifle open debate among AI researchers about model performance and safety.
Both lawsuits are expected to continue for months. The D.C. Circuit will hear oral arguments on May 19, while the San Francisco case proceeds on its own timeline. Details about how the Department of Defense has used Claude, or the extent of its transition to alternatives from Google DeepMind, OpenAI, or other vendors, remain scarce.
As the legal battles unfold, Anthropic faces uncertainty about its role in federal AI initiatives. The outcome may shape how future supply‑chain risk designations are applied to domestic tech companies, a question that looms large for the industry and for national‑security policymakers alike.