OpenAI’s Pentagon Deal Raises Concerns Over Military Use and Domestic Surveillance

TechRadar

Key Points

  • OpenAI signed a new contract with the U.S. Department of Defense that may allow AI‑driven domestic surveillance.
  • Anthropic lost a $200 million Pentagon contract after refusing to support autonomous weapons and surveillance uses.
  • OpenAI removed its 2023 ban on military use of its models and partnered with defense firm Anduril for national‑security purposes.
  • The Pentagon accessed OpenAI technology via Microsoft’s Azure platform, bypassing earlier usage restrictions.
  • Experts warn that current regulations lag behind AI advances, risking privacy violations for ordinary citizens.
  • OpenAI researchers say the original contract language left unanswered questions about novel surveillance capabilities.
  • Former OpenAI geopolitics head highlights civilian harm and opacity as major concerns.

OpenAI has entered a new contract with the U.S. Department of Defense that critics say leaves room for the technology to be used in mass domestic surveillance and autonomous weapons. The agreement follows Anthropic’s loss of a $200 million Pentagon contract after refusing such uses. While OpenAI removed a 2023 ban on military applications and signed a deal with Anduril for national‑security purposes, experts warn that current regulations lag behind AI advances, risking privacy violations for everyday citizens.

Background

Anthropic, an AI firm, was labeled a supply‑chain risk by Defense Secretary Pete Hegseth and subsequently lost a $200 million Pentagon contract after refusing to allow its models to be used for autonomous weapons systems and mass domestic surveillance. This development set the stage for OpenAI’s latest engagement with the U.S. military.

OpenAI’s Pentagon Contract

OpenAI signed a new agreement with the Department of Defense that, according to internal sources, contains language that could permit the use of its artificial‑intelligence models for domestic surveillance and other contentious purposes. OpenAI's 2023 policy barred military use of its models, but employees have disclosed that the Pentagon nonetheless accessed OpenAI technology through a Microsoft Azure arrangement that was not subject to the same restrictions.

In 2024, OpenAI removed the blanket ban on military applications of its models and later entered a contract with defense contractor Anduril to deploy its models for national‑security missions. OpenAI CEO Sam Altman has publicly voiced support for Anthropic's refusal to allow harmful uses of AI, yet the new agreement appears to leave similar avenues open.

Regulatory Gaps and Privacy Risks

Current regulations have not kept pace with rapid AI advancements, allowing government agencies to purchase personal data from data brokers and use AI to build detailed profiles of citizens. Critics argue that the contract's wording fails to address novel ways AI could enable such surveillance, raising concerns about the opacity of military AI use and its impact on civilian privacy.

Expert Reactions

OpenAI researcher Noam Brown noted that the original contract language left “legitimate questions unanswered” about how AI might be used for surveillance, and that the updated language attempts to address those concerns. Former head of OpenAI’s geopolitics team Sarah Shoker warned that everyday people and civilians in conflict zones are the biggest losers, as technical design and policy opacity hinder understanding of military AI effects.

Overall, the deal places OpenAI under scrutiny similar to that faced by Anthropic, highlighting the tension between national‑security objectives and the need for robust safeguards against misuse of artificial‑intelligence technologies.
