Google and OpenAI Employees Sign Open Letter Demanding Limits on Military AI

Key Points
- Nearly a thousand Google and OpenAI employees signed an open letter on military AI.
- The letter urges companies to maintain clear ethical limits on AI for surveillance and autonomous weapons.
- It references past employee protests at Google over Project Maven.
- The Pentagon labeled Anthropic a supply‑chain risk after the company refused to enable mass surveillance or autonomous weapons.
- Signatories include staff from rival AI firms, showing cross‑company solidarity.
- Workers aim to influence corporate policy on defense contracts and to uphold existing AI principles.

Nearly a thousand engineers from Google and OpenAI have signed an open letter urging their companies to reject Pentagon pressure to expand the military use of artificial intelligence. The letter, framed as a show of solidarity, calls for clear ethical boundaries on AI applications in surveillance and autonomous weapons. It references past internal protests at Google over Project Maven and highlights Anthropic’s recent designation as a supply‑chain risk after the company refused to allow its technology to be used for mass surveillance or fully autonomous weapons. The workers hope their collective voice will influence corporate policy on defense contracts.
Workers Unite Over Military AI Concerns
Almost a thousand employees from two leading AI labs, Google and OpenAI, have come together to sign an open letter asking their employers to push back against U.S. military pressure to broaden the permissible uses of artificial intelligence. The signatories frame their message as a unified stance, declaring that they will not be divided on the question of AI’s role in defense.
Key Demands and Ethical Framing
The letter calls for clear limits on AI technologies that could be employed for surveillance or fully autonomous weapons. It urges the companies to maintain the ethical boundaries set out in existing AI principles and to resist any attempts by government officials to erode those safeguards.
Context of Recent Government Actions
The Pentagon recently labeled Anthropic, another AI firm, a “supply‑chain risk” after the company refused to allow its technology to be used for domestic mass surveillance or fully autonomous weapons. The designation has heightened concern among engineers, who see a pattern of growing pressure on AI developers to support defense initiatives.
Historical Precedent at Google
The open letter echoes earlier employee activism at Google, where thousands protested the company’s involvement in Project Maven, a Pentagon program that used machine learning to analyze drone footage. After sustained internal backlash, Google let the contract expire and published a set of AI Principles pledging not to develop technologies designed to cause harm or to enable surveillance that violates international norms.
Cross‑Company Solidarity
Notably, the letter includes signatories from rival firms, underscoring a rare moment of cooperation across competitive boundaries. The workers argue that AI’s growing power makes decisions about its use too consequential to be treated as routine business agreements.
Potential Impact
While the letter’s immediate effect on corporate decisions remains uncertain, it provides a clear, documented expression of employee concern that companies cannot easily ignore. The signatories hope that their collective voice will shape future policies regarding AI’s role in defense and preserve the ethical standards outlined in earlier AI principles.