DeepMind Staff Vote to Unionize Amid Google’s Pentagon AI Deal

Key Points
- DeepMind employees in the UK voted to join the Communication Workers Union and Unite the Union.
- The vote was driven by the announcement of a Pentagon contract that permits "any lawful use" of Google AI.
- Workers fear their technology could be used in military operations, including support for the Israel Defense Forces.
- The union demands an independent ethics oversight board and the right to refuse harmful projects.
- Google’s Gemini AI and other tools were developed by the unified AI team that includes DeepMind staff.
- The Pentagon deal bars use for domestic mass surveillance or autonomous weapons without human oversight, but gives no veto power to Google.
- Google has not yet commented on the union’s request for recognition and ethical safeguards.

Around 1,500 DeepMind employees in the United Kingdom have voted to join the Communication Workers Union and Unite the Union, sending a letter to Google demanding formal recognition. The union drive, sparked by the revelation of a Pentagon contract that would let the U.S. Defense Department use Google’s artificial‑intelligence tools, reflects growing unease among researchers about military applications of their work, including ties to the Israel Defense Forces. Workers are also pressing for an independent ethics board and the right to refuse projects they deem harmful.
DeepMind staff in the United Kingdom have formally voted to unionize, marking the first such move at Google’s AI research arm. The roughly 1,500 workers sent a letter to senior management asking that the company recognize the Communication Workers Union and Unite the Union as their collective bargaining representatives.
The union drive was catalyzed by news that Google, along with other leading AI firms, had signed a contract with the U.S. Defense Department. The Pentagon announced the agreement last week, granting the military "any lawful use" of the participating companies' AI technologies. While the deal, as reported by The Information, bars the use of these tools for domestic mass surveillance or autonomous weapons without appropriate human oversight, it leaves Google with no right to control or veto how the government deploys its AI.
DeepMind researchers said the prospect of their work being weaponized spurred the vote in April. They cited concerns about the U.S. government’s "capricious Iran war" and a reported feud with rival AI firm Anthropic as evidence that the Department of Defense may not be a responsible partner. Some employees also warned that the technology they helped build could be aiding the Israel Defense Forces, noting Google’s $1.2 billion cloud‑computing contract with Israel signed in 2021.
Workers demand ethical safeguards
The union’s demands go beyond formal recognition. Employees want Google to commit publicly to not develop technology whose primary purpose is to cause harm or injury. They are calling for an independent ethics oversight body that could review and approve—or block—projects on moral grounds. Additionally, staff want a formal right to refuse work on projects they deem unethical.
Google’s AI products, including the Gemini model, were created by a unified team that incorporates DeepMind talent. The unionization effort therefore directly challenges the company’s ability to continue supplying the Pentagon and other militaries with cutting‑edge AI without addressing the ethical concerns raised by its own engineers.
Industry observers note that the DeepMind vote could set a precedent for other tech workers grappling with the dual‑use nature of AI. As governments worldwide seek to harness artificial intelligence for national security, the balance between innovation and moral responsibility is increasingly under scrutiny.
Google has not yet responded publicly to the union’s letter. The next steps will likely involve negotiations over collective bargaining rights, ethical oversight mechanisms, and the future of the company’s defense‑related contracts.