Anthropic Rejects Pentagon Demand to Remove AI Guardrails

Engadget

Key Points

  • Defense Secretary Pete Hegseth set a deadline of 5:01 PM on Friday for Anthropic to drop safety safeguards on Claude.
  • The Pentagon threatened to cancel a $200 million contract and label Anthropic a supply‑chain risk.
  • CEO Dario Amodei said Anthropic cannot in good conscience comply and will keep the safeguards in place.
  • Anthropic offered to support a smooth transition to another AI provider if the DoD decides to offboard them.
  • The request would allow Claude to be used for mass surveillance and fully autonomous weapons.
  • Claude is currently the only AI model approved for the military's most sensitive tasks.
  • The DoD is evaluating alternatives such as Grok, Google's Gemini, and OpenAI's models.
  • The dispute underscores the clash between AI safety priorities and defense‑sector demands.

Defense Secretary Pete Hegseth gave Anthropic a deadline of 5:01 PM on Friday to drop the safety safeguards on its Claude AI system, threatening to cancel a $200 million contract and label the firm a supply‑chain risk. CEO Dario Amodei responded that Anthropic cannot in good conscience comply, insisting on keeping the safeguards in place while remaining willing to support the military. The Pentagon's request would permit Claude to be used for mass surveillance and fully autonomous weapons, uses Anthropic refuses to enable. The standoff raises questions about AI safety, government contracts, and potential alternatives such as Grok, Google's Gemini, and OpenAI's models.

Background

The U.S. Department of Defense, led by Defense Secretary Pete Hegseth, demanded that Anthropic make its Claude AI model available for "all lawful purposes," explicitly including mass surveillance and the development of fully autonomous weapons that could operate without human oversight. The department warned that failure to comply could result in the cancellation of a $200 million contract and the designation of Anthropic as a "supply chain risk," a label traditionally reserved for firms from adversarial nations.

Anthropic’s Response

Anthropic CEO Dario Amodei published a blog post stating that the company cannot, in good conscience, remove the safety guardrails that prevent Claude from being used in these ways. Amodei emphasized a "strong preference" to continue serving the Department and its warfighters while retaining the two safeguards at issue. He also pledged to facilitate a smooth transition to another provider if the Pentagon decides to offboard Anthropic, aiming to avoid disruption to ongoing military planning and critical missions.

Pentagon’s Counter‑move

In reaction, Under Secretary of Defense Emil Michael accused Amodei of wanting "nothing more than to try to personally control the US military" and suggested that the CEO was putting national safety at risk. The Pentagon set a firm deadline of 5:01 PM on Friday for Anthropic to accept the terms, simultaneously requesting an assessment of the department's reliance on Claude as an initial step toward potentially labeling the firm a supply‑chain risk.

Implications for Military AI Use

Claude has been the sole AI model approved for the most sensitive military tasks, including intelligence analysis, weapons development, and battlefield operations. Reports indicate Claude was used in the Venezuelan raid that exfiltrated President Nicolás Maduro and his wife. The Department’s push to expand Claude’s permissible uses to mass surveillance and autonomous weaponry would mark a significant escalation in the scope of AI deployment within defense operations.

Potential Alternatives

The DoD is reportedly evaluating other AI providers, including xAI's Grok, Google's Gemini, and OpenAI's models, as possible replacements should Anthropic be offboarded. However, transitioning away from Claude could prove complex given its deep integration into critical defense workflows.

Broader Context

The standoff highlights the tension between AI safety advocates and government agencies seeking broader AI capabilities for national security. While AI companies face criticism for potential user harm, the prospect of mass surveillance and autonomous weapons raises the stakes dramatically. Anthropic’s refusal tests its claim of being the most safety‑forward AI firm, especially after recently dropping its flagship safety pledge.

Looking Ahead

The next steps hinge on the Pentagon’s willingness to follow through on its threats. A cancellation of the contract or a supply‑chain risk designation could have serious financial and operational repercussions for Anthropic, while also influencing how other AI firms negotiate safety requirements with government customers.

#ArtificialIntelligence #DefenseDepartment #AISafety #GovernmentContracts #SupplyChainRisk #Anthropic #Claude #MilitaryAI #AutonomousWeapons #MassSurveillance #TechPolicy