Judge Calls Pentagon’s Move to Label Anthropic a Supply‑Chain Risk ‘Attempt to Cripple’ Company

Key Points
- U.S. District Judge Rita Lin calls the Pentagon's supply‑chain‑risk label on Anthropic an "attempt to cripple" the company.
- Anthropic sued, claiming the designation is illegal retaliation for seeking limits on military use of its Claude AI system.
- The Department of Defense defends the label as a security measure, citing concerns about AI reliability in critical moments.
- Defense Secretary Pete Hegseth announced a blanket ban on contractors working with Anthropic, though legal authority for the ban is unclear.
- The case raises First Amendment questions and broader debate over AI use in the armed forces.
- The Pentagon plans to replace Anthropic technology with alternatives from Google, OpenAI and xAI.
- Judge Lin may grant a temporary injunction to pause the designation pending a full merits determination.
- A related appeal is pending in a federal appeals court, with a decision expected soon.

During a hearing, U.S. District Judge Rita Lin questioned the Department of Defense’s decision to label AI developer Anthropic a supply‑chain risk, describing it as an apparent attempt to cripple the company after it sought limits on military use of its Claude AI system. Anthropic has filed lawsuits alleging illegal retaliation, and the judge is considering a temporary injunction that could pause the designation. The case highlights tensions over AI use in the armed forces, First Amendment concerns, and the Pentagon’s authority to restrict contractors.
Background of the Dispute
Anthropic, the creator of the Claude artificial‑intelligence system, has taken legal action against the U.S. Department of Defense after the Pentagon designated the company a supply‑chain risk. The label was applied after Anthropic pushed for restrictions on how its AI tools could be employed by the military. The company argues that the designation is retaliation for its public scrutiny of a contract dispute, potentially violating First Amendment protections.
Judicial Scrutiny
During a court hearing, District Judge Rita Lin expressed concern that the Pentagon’s action resembled an effort to cripple Anthropic. She noted that the supply‑chain‑risk authority is typically reserved for foreign adversaries, terrorists and other hostile actors, and questioned whether the designation was appropriately tailored to genuine national‑security concerns. Judge Lin indicated that she could issue a temporary order to pause the designation only if she finds Anthropic likely to succeed on the merits of its case.
Government Position
The Department of Defense, referring to itself as the Department of War, defended its decision by asserting that Anthropic’s AI tools could not be relied upon during critical moments. A Trump‑administration attorney, Eric Hamilton, argued that the department had followed proper procedures and that the security assessment should not be second‑guessed. The Pentagon also announced plans to replace Anthropic’s technology with alternatives from Google, OpenAI and xAI, and claimed to have safeguards to prevent any tampering during the transition.
Contractor Restrictions and Legal Authority
Defense Secretary Pete Hegseth posted a statement indicating that any contractor, supplier or partner doing business with the U.S. military was barred from commercial activity with Anthropic. However, during the hearing Hamilton acknowledged that Hegseth lacks legal authority to impose such a blanket ban on contractors for work unrelated to the Department of Defense. When asked why Hegseth made the statement, Hamilton said he did not know.
Implications for AI in the Military
The case has sparked a broader public conversation about the role of artificial intelligence in the armed forces and the degree of deference Silicon Valley firms should give to government determinations about technology deployment. Critics argue that the Pentagon’s approach could set a precedent for punitive measures against companies that raise concerns about military applications of AI.
Next Steps
Judge Lin is expected to issue a ruling on the temporary injunction in the coming days. A related appeal is also pending in a federal appeals court in Washington, D.C., with a decision anticipated soon. The outcome will shape both Anthropic’s relationship with the government and the broader landscape of AI procurement for national‑security purposes.