Anthropic Resumes Negotiations with U.S. Defense Department Over AI Contract

Key Points
- Anthropic CEO Dario Amodei is back in talks with the U.S. Defense Department.
- The dispute centers on a contract clause about "analysis of bulk acquired data."
- The Pentagon threatened to label Anthropic a supply‑chain risk and cancel the deal.
- A presidential order directed agencies to stop using Anthropic’s technology.
- Both sides are working to delete the contentious phrase and preserve the partnership.

Anthropic CEO Dario Amodei has reopened talks with the U.S. Defense Department after a dispute over contract language concerning the use of the company’s AI models for bulk data analysis. The disagreement stemmed from a clause Anthropic wanted removed, fearing it could enable mass surveillance. The department had threatened to label Anthropic a supply‑chain risk and cancel its existing agreement, a threat that prompted a presidential directive to halt government use of the company’s technology. Both parties are now working to resolve the language issue and preserve the partnership.
Background of the Dispute
Anthropic originally signed a multi‑year contract with the Defense Department in 2025, valued at $200 million. During subsequent negotiations, the Pentagon sought to include language that would allow the use of Anthropic’s AI models for the analysis of bulk‑acquired data. Anthropic’s leadership argued that this clause could be used for mass surveillance, and they insisted on removing the specific phrase.
Escalation and Government Response
When Anthropic refused to amend the contract, the Defense Department threatened to cancel the existing agreement and label the company a "supply chain risk," a designation typically reserved for foreign entities. The threat prompted a presidential order directing all government agencies to stop using Anthropic’s technology.
Current Negotiations
According to reports, Amodei has resumed discussions with Under Secretary of Defense for Research and Engineering Emil Michael, with the two attempting to resolve the dispute over contractual language. The department reportedly offered to accept Anthropic’s terms if the contentious phrase about "analysis of bulk acquired data" were deleted, the single line Anthropic had identified as its primary concern.
Implications for Government Use
The contract includes a six‑month phase‑out period that would allow the government to continue using Anthropic’s AI tools for certain operations, such as staging an air strike, even if the agreement were terminated. The ongoing talks aim to prevent a full termination of the partnership and to avoid the supply‑chain risk designation.
Industry Context
The dispute has highlighted differences in how AI companies approach government contracts, especially regarding surveillance and ethical use. Competing firms have taken varied stances, with some emphasizing explicit prohibitions on mass surveillance in their agreements.