Anthropic Sues U.S. Government Over Supply Chain Risk Designation

Engadget

Key Points

  • Anthropic filed a lawsuit to block a Pentagon supply‑chain risk designation.
  • The company claims the designation violates free‑speech and due‑process rights.
  • Defense officials pressured Anthropic to remove safeguards against surveillance and autonomous weapons.
  • Anthropic’s CEO Dario Amodei refused to compromise on those safeguards.
  • The Pentagon threatened to cancel a $200 million contract and add Anthropic to a blocklist.
  • Anthropic’s statement calls the government’s actions unprecedented and unlawful.
  • OpenAI later secured a separate Defense Department contract with similar safety clauses.
  • OpenAI’s robotics head resigned, citing concerns over surveillance and lethal autonomy.

Anthropic has filed a lawsuit to block the Pentagon from adding the AI firm to a national‑security blocklist after the Department of Defense labeled it a supply‑chain risk. The company argues the designation violates free‑speech and due‑process rights and lacks statutory authority. The legal action follows weeks of tension with the Defense Department, which pressed Anthropic to remove safeguards against mass surveillance and autonomous weapons. Anthropic’s CEO Dario Amodei refused, leading to threats of contract cancellation and a broader government push to bar the firm from federal use. OpenAI later secured a deal with the Defense Department, emphasizing similar safety principles.

Background

Anthropic, a leading artificial‑intelligence developer, received a letter from the Department of Defense confirming that the agency had labeled the company a supply‑chain risk. This designation would place Anthropic on a national‑security blocklist, effectively barring the firm from many federal contracts.

Weeks of back‑and‑forth between Anthropic and the Defense Department preceded the lawsuit. In late February, Defense Secretary Pete Hegseth and senior officials pressured Anthropic to remove safeguards that prevent the company's models from being used for mass surveillance or the development of autonomous weapons. CEO Dario Amodei made clear the company would not consent to such uses.

Legal Action

When Anthropic refused to alter its safeguards, the Pentagon threatened to add the firm to the supply‑chain risk list and to cancel a $200 million contract. The company responded by filing a lawsuit seeking judicial review of the designation. The complaint alleges that the government’s action is unlawful, violates Anthropic’s free‑speech and due‑process rights, and lacks any authorizing federal statute.

Anthropic’s statement to the press says, “These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.” The lawsuit characterizes the government’s conduct as an “unprecedented and unlawful … campaign of retaliation.”

Company Position

Anthropic emphasized that pursuing legal review does not change its “longstanding commitment to harnessing AI to protect our national security,” but it is a necessary step to protect its business, customers, and partners. The company also noted that it had agreed to “collaborate with the Department on an orderly transition to another AI provider willing to meet its demands.”

Industry Reaction

OpenAI subsequently secured a separate agreement with the Defense Department. OpenAI CEO Sam Altman highlighted the company’s safety principles, including prohibitions on domestic mass surveillance and a requirement of human responsibility for the use of force, including autonomous weapon systems. The contract explicitly states that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

Following OpenAI’s deal, the company’s head of robotics hardware, Caitlin Kalinowski, resigned, posting on X that “surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”

Implications

The lawsuit underscores a growing clash between the federal government’s security objectives and AI developers’ ethical safeguards. It also highlights the legal uncertainties surrounding the government’s authority to impose supply‑chain risk designations on technology firms.
