OpenAI and Google Engineers Back Anthropic’s Lawsuit Against Pentagon

The Verge

Key Points

  • Anthropic sued the Pentagon after being labeled a supply‑chain risk for refusing to enable mass surveillance or autonomous lethal weapons.
  • Nearly 40 engineers and researchers from OpenAI and Google filed an amicus brief supporting Anthropic.
  • The brief argues the risk designation is improper retaliation that harms public interest.
  • Authors warn that AI‑driven integration of fragmented data could create real‑time nationwide surveillance.
  • Autonomous weapons are described as unreliable in novel conditions and prone to hallucination.
  • The brief calls for technical safeguards or usage restrictions to keep humans in the decision loop.
  • Signatories emphasize a shared conviction that AI risks require guardrails despite diverse politics.

Anthropic sued the Department of Defense after being labeled a supply‑chain risk for refusing to enable domestic mass surveillance and fully autonomous lethal weapons. Hours later, nearly 40 engineers, researchers and scientists from OpenAI and Google filed an amicus brief supporting Anthropic, warning that the designation threatens public interest and that the two red lines reflect genuine risks. The brief emphasized concerns about AI‑driven mass surveillance and the unreliability of autonomous weapons, calling for technical safeguards or usage restrictions.

Anthropic’s Legal Challenge

Anthropic filed a lawsuit against the Department of Defense after the agency designated the company as a supply‑chain risk. The designation, typically reserved for foreign firms deemed a national‑security threat, was applied because Anthropic refused to relax two red lines that prohibit the use of its technology for domestic mass surveillance and fully autonomous lethal weapons.

Industry Response

Within hours of the filing, nearly 40 employees from OpenAI and Google, including senior figures, submitted an amicus brief in support of Anthropic’s case. The signatories described themselves as engineers, researchers, scientists and other professionals employed at leading U.S. frontier artificial‑intelligence laboratories.

The brief argues that the supply‑chain risk label is improper retaliation that harms the public interest. It stresses that Anthropic’s red lines are rooted in real concerns that require a response.

Risks of Domestic Mass Surveillance

The brief notes that while data on American citizens exists in many fragmented forms—surveillance cameras, geolocation data, social‑media posts, financial transactions—an AI layer that unifies these streams could create a real‑time, nationwide surveillance apparatus. The authors warn that such capability poses profound risks to democratic governance, even if used responsibly.

Concerns About Autonomous Lethal Weapons

The authors point out that autonomous weapons can be unreliable in novel or ambiguous conditions, lacking the nuanced judgment humans provide. They also highlight the phenomenon of AI hallucination, which can obscure the reasoning behind target identification, making it essential to keep humans in the loop before any lethal munition is launched.

Because these systems may not reliably distinguish targets or account for collateral effects, the brief calls for technical safeguards or usage restrictions to prevent deployment without human oversight.

Unified Voice Across Companies

Although the signatories come from different companies and hold diverse political views, they share a conviction that frontier AI systems present risks when deployed for mass surveillance or autonomous lethal weapons. They urge the development of guardrails—whether technical or policy‑based—to address these dangers.

Tags: artificial intelligence, AI ethics, government regulation, defense contracts, technology lawsuit, industry protest, AI safety, national security, autonomous weapons, mass surveillance