Pete Hegseth tells Anthropic to align with DoD AI demands or face exclusion

Key Points
- Pete Hegseth warned Anthropic to comply with DoD AI demands or risk removal from the defense supply chain.
- The Defense Production Act enables the administration to allocate resources for national defense.
- DoD’s AI strategy aims to use artificial intelligence broadly to reshape military affairs over the next decade.
- Anthropic raised concerns about using its models for lethal missions without a human in the loop.
- The company also advocated for stricter rules on AI‑enabled mass domestic surveillance.
- Excluding Anthropic could affect its $200 million defense contract and partners such as Palantir.
- Claude was used in the U.S. operation that captured Venezuelan leader Nicolás Maduro.
- Anthropic’s leadership emphasized they have not objected to legitimate military operations.

Defense Secretary Pete Hegseth warned AI firm Anthropic that it must cooperate with the Department of Defense's AI strategy or risk being removed from the defense supply chain. The department's recent AI strategy emphasizes open‑ended use of artificial intelligence to reshape warfare, while Anthropic has raised concerns about the reliability of its models for lethal missions without a human in the loop and has advocated for stricter rules on domestic surveillance uses. A cut would affect Anthropic's $200 million contract with the department and partners such as Palantir.
Background on Defense Production Authority
The Defense Production Act (DPA) gives the administration the authority to allocate materials, services, and facilities for national defense. Past administrations have invoked the DPA to address shortages of medical supplies during the coronavirus pandemic and to boost production of critical minerals.
DoD’s AI Strategy and Hegseth’s Memo
The Pentagon has pushed for open‑ended use of AI technology, seeking to expand the set of tools available to counter threats and conduct military operations. The department released its AI strategy last month, and Hegseth emphasized in a memo that "AI‑enabled warfare and AI‑enabled capability development will redefine the character of military affairs over the next decade." He added that the U.S. military "must build on its lead" over foreign adversaries to make soldiers "more lethal and efficient," noting that the AI race is "fueled by the accelerating pace" of private‑sector innovation.
Anthropic’s Concerns
Anthropic has expressed particular concern about its models being used for lethal missions that do not have a human in the loop, arguing that state‑of‑the‑art AI models are not reliable enough to be trusted in those contexts. The company has also pushed for new rules to govern the use of AI models for mass domestic surveillance, even where such use is legal under current regulations.
Potential Consequences of Exclusion
A decision to cut Anthropic from the Defense Department's supply chain would have significant ramifications both for national‑security work and for the company, which holds a $200 million contract with the department. The move would also affect partners, including Palantir, that incorporate Anthropic's models into their systems.
Recent Operational Use and Dialogue
Anthropic’s model Claude was used in the U.S. capture of Venezuelan leader Nicolás Maduro in January, prompting queries from the company about exactly how its model was employed. A source familiar with a recent meeting said Anthropic co‑founder Dario Amodei stressed to Hegseth that the company had never objected to legitimate military operations. The Defense Department declined to comment on the matter.