Anthropic CEO Accuses OpenAI of Lying About Pentagon Deal

Key Points
- Anthropic CEO Dario Amodei calls OpenAI's Pentagon deal claims "straight up lies" and "mendacious".
- Anthropic withdrew from a U.S. intelligence contract over AI safety concerns related to surveillance and autonomous weapons.
- Amodei criticizes OpenAI's "safety theater" and its focus on employee appeasement rather than real safeguards.
- He questions the vague "all lawful use" clause in OpenAI's agreement as a potential loophole.
- OpenAI CEO Sam Altman admits the Pentagon announcement was rushed and sloppy.
- Financial Times reports Anthropic may be back in talks with the Pentagon, but terms are unspecified.
- Public reaction includes a spike in ChatGPT uninstalls and a rise in Claude's App Store rankings.

Anthropic chief executive Dario Amodei sent an internal memo denouncing OpenAI's statements about its new Pentagon agreement as "straight up lies" and "mendacious." The memo follows Anthropic's withdrawal from a separate U.S. intelligence contract over concerns about AI use in mass surveillance and autonomous weapons. Amodei criticizes OpenAI for prioritizing employee appeasement over genuine safety safeguards and questions the vague "all lawful use" language in the Pentagon deal. OpenAI's Sam Altman later admitted the announcement was rushed, while reports suggest Anthropic may be re-entering talks with the Pentagon.
Background
Anthropic, the creator of the Claude AI chatbot, recently pulled out of a proposed contract with U.S. intelligence agencies. The company cited safety concerns about deploying artificial intelligence for domestic mass surveillance and for fully autonomous weapons.
Anthropic’s Internal Memo
Against this backdrop, Anthropic CEO Dario Amodei circulated a lengthy internal memo harshly attacking OpenAI's public messaging about its own Pentagon arrangement. Amodei labels OpenAI's statements "straight up lies" and calls the overall messaging "mendacious." He argues that OpenAI's promised "safety layer" and other reassurances amount to "safety theater," serving more to placate employees than to protect the public.
Critique of OpenAI’s Deal
Amodei takes particular issue with a clause in OpenAI's agreement permitting "all lawful use," describing the language as a gray area that could open the door to questionable activities such as domestic surveillance. He also argues that OpenAI's emphasis on additional "guardrails" does not meaningfully address the underlying safety risks.
OpenAI’s Response
OpenAI chief Sam Altman later acknowledged that the initial announcement of the Pentagon deal was "rushed" and "sloppy," implying that the company may need to refine its approach to government contracts.
Potential Re‑Engagement with the Pentagon
According to a report from the Financial Times, Anthropic and its Claude model may be exploring a renewed partnership with the U.S. military, though details of any prospective terms remain unclear.
Industry and Public Reaction
The dispute has heightened scrutiny of AI firms' relationships with the military. In the days following the announcements, ChatGPT's uninstall rate rose sharply while Claude surged in the App Store rankings, reflecting divided public sentiment about the ethics of deploying AI in defense contexts.