OpenAI CEO Sam Altman Urges U.S. Government and Anthropic to De‑Escalate AI Tensions

Key Points
- OpenAI CEO Sam Altman urged the Pentagon and Anthropic to stop their escalating AI dispute.
- Anthropic refused to remove safeguards that block fully autonomous weapons and mass surveillance.
- The U.S. government labeled Anthropic a "supply‑chain risk" and barred federal use of its Claude model.
- Anthropic sued, claiming constitutional violations; a judge temporarily blocked the Pentagon’s actions.
- Altman argues AI’s geopolitical impact demands government oversight and collaborative governance.
- He expresses cautious trust in democratic institutions despite public skepticism of government.
- Altman warns that AI’s rapid growth outpaces regulatory and institutional capacity.
- The call for cooperation seeks a balance between national‑security needs and ethical safeguards.

Sam Altman, chief executive of OpenAI, called on the Pentagon and Anthropic to move past their escalating clash over AI use in national security and find a way to cooperate on artificial‑intelligence governance. In a recent interview, Altman said the technology’s geopolitical weight demands government oversight and a collaborative approach, warning that unchecked competition could jeopardize both safety and innovation.
The dispute began when Pentagon officials sought a version of Anthropic’s Claude model for military applications, only to hit a wall after the company refused to strip safeguards that block fully autonomous weapons and mass domestic surveillance.
Washington responded with an executive directive that barred federal agencies from using Anthropic’s technology and labeled the firm a “supply‑chain risk.” Anthropic sued, alleging constitutional violations, and a federal judge temporarily halted the Pentagon’s actions.
Altman, speaking with journalist Laurie Segall, framed the conflict as a symptom of a broader struggle over who should wield AI’s power. “Find a way to work together,” he said, urging both sides to stop the escalation. He added that the stakes are “the highest‑order bit in geopolitics” and that AI will shape future wars, cyber defenses, and national‑security decisions.
Unlike some AI leaders who view government with suspicion, Altman expressed a tentative trust in democratic institutions. He acknowledged public wariness, noting many “really don’t trust the government to follow the law,” yet argued that the technology’s impact is too consequential to leave solely to private firms.
“The future of the world and the decisions about the most important elements of national security should be made through a democratically elected process,” Altman said. He warned that AI’s rapid advancement is outpacing the ability of governments, regulators, and even most individuals to calibrate its risks.
Altman’s remarks come as AI companies continue to lobby for light regulation while touting the technology’s promise for national‑security missions. He cautioned that the industry cannot claim both unfettered innovation and a hands‑off approach to governance. “If AI is as geopolitically consequential as everyone keeps insisting, then governments are going to want a hand on the wheel,” he said.
The OpenAI chief also stressed that collaboration does not mean surrendering control. “I don’t think it works for our industry to say, ‘Hey, this is the most powerful technology humanity has ever built,’ and then hand it over without oversight,” Altman explained. He called for a balanced partnership in which companies help the government protect critical infrastructure without compromising ethical safeguards.
While Altman’s call for cooperation may not resolve the legal battles overnight, it signals a shift toward seeking common ground. The Pentagon’s push for AI capabilities, Anthropic’s resistance to removing safety features, and the broader regulatory debate are likely to remain flashpoints as lawmakers grapple with how to integrate powerful models into defense strategies without endangering civil liberties.