Trump Orders Federal Halt to Anthropic’s Claude AI Over Surveillance Concerns

Key Points
- President Trump ordered an immediate stop to federal use of Anthropic's Claude AI.
- Anthropic refuses to let the Pentagon use Claude for mass domestic surveillance or fully autonomous weapons.
- The president labeled Anthropic a "radical left, woke company" and set a six‑month phase‑out.
- Defense Secretary Pete Hegseth threatened to invoke powers to force compliance or label Anthropic a supply‑chain risk.
- Anthropic CEO Dario Amodei said the company cannot in good conscience remove its safety clauses.
- Legal experts note ambiguity in contract terms like "lawful purposes" and the importance of clear safeguards.
- Employees at OpenAI and Google have shown support for Anthropic's stance on AI red lines.
- The dispute highlights the gap between rapid AI deployment in government and lagging regulatory frameworks.

President Donald Trump instructed U.S. federal agencies to stop using Anthropic's Claude artificial‑intelligence system after the company refused to let the Department of Defense apply the technology for mass domestic surveillance or fully autonomous weapons. The president's post on Truth Social called Anthropic a "radical left, woke company" and set a six‑month phase‑out for agencies. Anthropic CEO Dario Amodei said the firm could not in good conscience remove contract clauses that prohibit use of Claude in autonomous weapons or surveillance. The clash highlights growing tension between government demands and AI firms' safety commitments.

Background
Anthropic's Claude AI model has become the most widely used artificial‑intelligence system within the U.S. military, appearing in classified Pentagon projects and other federal applications. The company was founded with an explicit focus on AI safety and has embedded contractual safeguards that bar the use of Claude for mass domestic surveillance of Americans or for fully autonomous weapons systems that lack human oversight.

The Dispute
President Donald Trump used his Truth Social platform to order an immediate cessation of federal use of Claude, calling the company a “radical left, woke company.” He announced a six‑month phase‑out for agencies such as the Department of Defense. Earlier in the week, Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei that he would invoke rarely used powers to force the company to allow the Pentagon to use Claude for any lawful purpose, or label Anthropic a supply‑chain risk. Hegseth gave Anthropic a Friday deadline to comply.
Amodei responded that Anthropic “cannot in good conscience accede” to the Pentagon’s request to remove the contractual provisions that prohibit use of Claude in autonomous weapons or domestic surveillance. He warned that existing laws have not kept pace with AI’s ability to aggregate scattered, innocuous data into comprehensive personal profiles at massive scale, raising significant privacy concerns.

Broader Context
Legal experts noted that contract language such as "lawful purposes" is often ambiguous, and that Anthropic's stance reflects a broader industry reluctance to enable mass surveillance or lethal autonomous weapons. Employees at rival firms such as OpenAI and Google have circulated petitions urging their companies to stand with Anthropic on these red‑line issues. OpenAI CEO Sam Altman reportedly affirmed similar guardrails, emphasizing technical safeguards such as cloud‑only deployment.
The dispute underscores a growing mismatch between the rapid adoption of AI in government and military settings and the slower development of regulatory oversight. Observers argue that its outcome could set a precedent for how technology companies negotiate with government agencies when ethical boundaries are at stake.