Anthropic’s Standoff with the Pentagon Over AI Use Policy

The Verge

Key Points

  • Anthropic’s Claude model is tied to a $200 million Pentagon contract that includes a responsible‑use policy.
  • The Pentagon seeks an "any lawful use" clause, expanding potential military applications.
  • Anthropic refuses to support autonomous lethal weapons and mass domestic surveillance.
  • Designation as a supply‑chain risk could end the contract and force contractors to drop Claude.
  • Other AI firms have already aligned their contracts with the Pentagon’s broader usage demands.
  • The dispute highlights tension between rapid AI adoption for defense and corporate ethical policies.

Anthropic, the AI startup behind the Claude model, is locked in a high‑stakes dispute with the U.S. Department of Defense. The Pentagon wants unrestricted, "any lawful use" of Anthropic’s technology, while the company refuses to support autonomous lethal weapons and mass domestic surveillance. The disagreement threatens a $200 million contract and could force defense contractors to drop Anthropic’s models. The clash highlights the tension between rapid military AI adoption and corporate responsible‑use policies.

Background

Anthropic, known for its Claude AI model, signed a $200 million contract with the Department of Defense last year. The agreement embeds the company’s "acceptable use policy," which bars the use of its technology for autonomous kinetic operations and mass domestic surveillance.

Pentagon’s Position

The Pentagon, led by Undersecretary of Defense for Research and Engineering Emil Michael, is pushing for an "any lawful use" clause that would give the military carte blanche to employ Anthropic’s services in any capacity, including mass surveillance and lethal autonomous weapons.

Anthropic’s Red Lines

Anthropic has made clear it will not comply with requests that conflict with its policy. The company cites existing DoD directives that require human judgment in the use of force and prohibit intelligence collection on U.S. persons without specific legal authority.

Potential Consequences

If the Pentagon classifies Anthropic as a supply‑chain risk, the $200 million contract could be terminated and defense contractors that rely on Claude may be forced to remove the technology from their systems. This would have a ripple effect across the defense industry, which currently uses Anthropic’s model for classified work.

Industry Reaction

Other AI firms, including OpenAI, xAI, and Google, have already renegotiated their Pentagon contracts to align with the "any lawful use" language. Anthropic's holdout has drawn both criticism and support from tech workers and policy experts, underscoring the broader debate over responsible AI deployment in national security contexts.

