OpenAI Secures Pentagon Contract While Anthropic Rejects Terms

The Verge

Key Points

  • OpenAI announced a Pentagon contract that it says aligns with its safety principles.
  • The agreement ties model use to existing U.S. laws such as the Fourth Amendment and the Foreign Intelligence Surveillance Act.
  • Critics highlight the phrase “any lawful use” as allowing broad government application.
  • Technical safeguards like classifiers are included, but their effectiveness is questioned.
  • Anthropic refused a similar deal, was labeled a supply‑chain risk, and plans to contest the designation.
  • Industry reaction has been split, with support for Anthropic’s stance on red lines.
  • The dispute raises concerns about legal interpretations and AI use in surveillance and weapons.

OpenAI announced a new agreement with the Pentagon that it says respects its safety principles on domestic mass surveillance and autonomous weapon systems. Critics point out that the deal relies on the phrase “any lawful use,” which they argue could allow broad government use of the technology. Anthropic refused a similar contract, was labeled a supply‑chain risk, and has drawn industry support. The dispute highlights differing approaches to AI safety, legal compliance, and the role of technical safeguards in military applications.

OpenAI's Pentagon Agreement

OpenAI’s chief executive announced that the company had reached a contract with the Department of Defense. The company emphasized that its two core safety principles—prohibitions on domestic mass surveillance and the requirement for human responsibility in the use of force—are reflected in the agreement. According to OpenAI, the contract ties any use of its models to existing U.S. law, including the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333 and relevant Department of Defense directives.

The company also said it would deploy technical safeguards such as classifiers that can monitor model behavior and that some employees would receive security clearances to oversee the systems.

Critics Question the Safeguards

Industry observers and former OpenAI staff argue that the agreement’s reliance on “any lawful use” effectively leaves the Pentagon free to employ the technology for any activity the government deems legal. They note that U.S. intelligence agencies have historically interpreted legal authorities to permit extensive data collection, including bulk domestic surveillance. Critics also call the contract’s qualifiers, such as “unconstrained,” “generalized,” and “open‑ended,” vague enough to leave the military wide latitude in how the models are applied.

Experts also question the effectiveness of the technical safeguards. Classifiers, they explain, cannot verify whether a human reviewed a decision before a lethal strike or whether a query is part of a mass‑surveillance program. Because the contract allows the government to define what is legal, the safeguards could be overridden if a legal interpretation changes.

Anthropic's Stance and Fallout

Anthropic, a rival AI firm, declined to sign a similar contract, insisting on terms that would specifically prohibit mass surveillance and unsupervised lethal autonomous weapons. After negotiations collapsed, the Pentagon classified Anthropic as a supply‑chain risk, a designation usually reserved for foreign companies with cybersecurity concerns. Anthropic announced plans to challenge the classification in court.

The disagreement sparked public support for Anthropic within the tech community, with notable figures and users praising the company’s decision to stand by its red lines.

Implications for AI and Defense

The contrasting approaches of OpenAI and Anthropic illustrate a broader debate over how AI companies should engage with military customers. While OpenAI argues that adhering to current laws provides sufficient protection, critics warn that legal frameworks can shift and may not adequately safeguard civil liberties or prevent autonomous weapon use without human oversight.

The situation underscores the importance of clear contractual language, robust technical safeguards, and ongoing public scrutiny as artificial intelligence becomes increasingly integrated into national security operations.

Tags: Artificial intelligence, Defense contracts, Mass surveillance, Lethal autonomous weapons, OpenAI, Anthropic, Pentagon, AI policy, National security, Technology ethics