Anthropic Sues U.S. Government Over Supply‑Chain Risk Designation

The Verge

Key Points

  • Anthropic sued the U.S. government over a supply‑chain risk designation.
  • The company alleges violations of First and Fifth Amendment rights.
  • The designation was issued by the Trump administration and required agencies to stop using Anthropic’s AI within six months.
  • The General Services Administration terminated its contract with Anthropic, ending the company’s availability across the federal government.
  • Major partners, including Microsoft, continue limited collaborations while separating Pentagon‑related work.
  • The lawsuit challenges the executive branch’s authority to label a domestic AI firm as a security risk.

Anthropic has filed a lawsuit in a California district court alleging that the U.S. government illegally labeled the AI firm as a supply‑chain risk and ordered all federal agencies to stop using its technology. The company claims the designation, issued by the Trump administration, violates its First and Fifth Amendment rights and exceeds executive authority. The suit follows a series of agency cutoffs, including the General Services Administration terminating its contract, and a broader controversy over the Pentagon’s use of Anthropic’s AI models. Anthropic says it will challenge the designation in court while its major partners continue limited collaborations.

Background

Anthropic, a leading private developer of artificial intelligence, was designated a supply‑chain risk by the U.S. government. The designation, typically applied to foreign firms deemed cybersecurity threats, was unusual for a domestic company. Following the designation, the Trump administration ordered all federal agencies to cease using Anthropic’s technology within six months. The move sparked bipartisan concern about the impact of political disagreement on a company’s ability to operate.

Legal Action

In response, Anthropic filed a lawsuit in a California district court. The complaint alleges that the government’s actions punish the company for its protected speech on AI safety and the limits of autonomous weapons, violating the First Amendment. It also claims the designation infringes on Anthropic’s Fifth Amendment rights and exceeds the executive branch’s authority. The suit seeks to overturn the supply‑chain risk label and restore the company’s ability to contract with federal agencies.

Government and Agency Response

Since the designation, several agencies have halted their use of Anthropic’s services. The General Services Administration terminated its OneGov contract, ending Anthropic’s availability to all three branches of the federal government. The Department of the Treasury, the State Department, and other agencies have also indicated plans to stop using the firm’s technology. The Pentagon declined to comment on the lawsuit.

Corporate Reactions

Major partners such as Microsoft have affirmed their continued partnership with Anthropic but are establishing safeguards to separate Pentagon‑related work from other collaborations. Anthropic maintains that it will challenge the designation in court and continue its focus on responsibly developing emerging AI technology.

Implications

The case highlights tensions between government security concerns and the rights of private AI developers. It raises questions about the scope of executive power in labeling domestic firms as national‑security risks and the potential chilling effect on speech related to AI safety. The outcome could set precedent for how AI companies engage with federal contracts and how policy disagreements are managed in the technology sector.

Tags: Anthropic, Department of Defense, supply‑chain risk, artificial intelligence, First Amendment, Fifth Amendment, government contracting, legal battle, AI safety, Pentagon
