Pentagon Declares Anthropic an Unacceptable Security Risk

Engadget

Key Points

  • The Pentagon says continued Anthropic access to warfighting infrastructure would create unacceptable risk.
  • A provision in AI contracts allows use for any lawful purpose; Anthropic refused to accept it.
  • The department warned Anthropic could disable or alter its AI models during operations if corporate "red lines" are crossed.
  • Anthropic sued to challenge a supply‑chain risk designation after refusing certain defense uses.
  • The company seeks a preliminary injunction to pause the ban while contesting the designation.
  • The filing suggests the label could cost Anthropic billions of dollars in revenue.
  • Microsoft, Google, and OpenAI have filed briefs supporting Anthropic.
  • President Trump ordered federal agencies to stop using Anthropic’s technology.

The Department of Defense has argued that allowing Anthropic continued access to its warfighting infrastructure would introduce an unacceptable risk to supply chains and national security. In a court filing responding to Anthropic's lawsuit over a supply‑chain risk designation, the Pentagon cited concerns that the company could disable or alter its AI models during operations if corporate “red lines” were crossed. The filing notes that the agency’s secretary, Pete Hegseth, included a provision in AI contracts permitting use for any lawful purpose; Anthropic refused to accept it, prompting the department to label the partnership unsafe.

Pentagon’s Security Concerns

The Department of Defense contended that granting Anthropic ongoing access to its warfighting infrastructure would "introduce unacceptable risk" to its supply chains, as detailed in a court filing responding to the AI company’s lawsuit. The filing explained that the department’s secretary, Pete Hegseth, had incorporated a provision into AI service contracts that allows the agency to use the technologies for any lawful purpose. Anthropic’s refusal to accept these terms raised doubts about its suitability as a "trusted partner" for highly sensitive initiatives.

The Pentagon warned that AI systems are highly vulnerable to manipulation. It asserted that Anthropic could disable its technology or preemptively alter its model’s behavior before or during warfighting operations if the company, at its own discretion, felt its corporate “red lines” were being crossed. The department deemed this scenario an "unacceptable risk to national security," referring to the agency in the filing as the Department of War, the name favored by the Trump administration.

Legal Conflict and Potential Impact

Anthropic sued the government to challenge the supply‑chain risk designation it received after refusing to allow its model to be used for mass surveillance and autonomous weapons development. The company is seeking a preliminary injunction to pause the ban while it contests the designation in court. The filing indicated that Anthropic’s clients could continue working with the company on non‑defense projects, but the label could cause the firm to lose billions of dollars in revenue.

While the filing did not clarify whether Anthropic is still pursuing a new agreement with the government, The New York Times reported that Microsoft, Google, and OpenAI had filed friend‑of‑the‑court briefs in support of Anthropic. The filing also referenced President Trump’s order for federal agencies to stop using Anthropic’s technology, underscoring the administration’s stance on the perceived security risk.
