More than 600 Google employees have publicly urged the company’s leadership to walk away from a classified contract with the US Department of Defense, warning that the technology could be used for mass surveillance or lethal autonomous weapons. The open letter, addressed to CEO Sundar Pichai and made public on Monday, reflects growing unease among tech workers about the militarisation of artificial intelligence.
“We want to see AI benefit humanity, not being used in inhumane or extremely harmful ways,” the letter states. “This includes lethal autonomous weapons and mass surveillance, but extends beyond.” The signatories include more than 20 directors, senior directors, and vice presidents from across Google DeepMind, Cloud, and other divisions.
The protest centres on negotiations between Google and the Pentagon over the potential deployment of the Gemini AI model in classified settings. Employees argue that the secrecy inherent in such work makes it impossible to ensure the technology is not misused. “Classified workloads are by definition opaque,” one organising employee said. “Right now, there’s no way to ensure that our tools wouldn’t be leveraged to cause terrible harms or erode civil liberties away from public scrutiny. We’re talking about things like profiling individuals or targeting innocent civilians.”
Broader Industry Tensions
The dispute is part of a wider pattern of friction between technology companies and military clients. In a separate case, AI startup Anthropic sued the US Department of Defense after being labelled a “supply-chain risk” for refusing to allow its systems to be used for mass surveillance or autonomous warfare. Anthropic CEO Dario Amodei said he “cannot in good conscience accede to the Pentagon’s request” for unrestricted access. “In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Amodei wrote. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.” In response, US President Donald Trump ordered government departments to stop using Anthropic’s Claude chatbot.
According to the letter organisers, Google has proposed contractual language that would prohibit Gemini from being used for domestic mass surveillance or autonomous weapons without appropriate human control. The Pentagon, however, has pushed for broader “all lawful uses” wording, arguing it is necessary to maintain operational flexibility. Employees say such safeguards would be difficult to enforce in practice, citing existing Pentagon policies that limit external oversight of its AI systems.
The current protest echoes a 2018 employee revolt over Project Maven, a Pentagon initiative that used AI to analyse drone footage; following that backlash, Google declined to renew the contract. “We believe that Google should not be in the business of war,” the letter concluded.
For European readers, the debate raises questions about the continent’s own approach to military AI. As Europe’s military conscription revival reshapes defence structures, and as Germany unveils its first-ever military strategy emphasising speed and deep strikes, the ethical boundaries of AI in warfare remain a live issue. The European Union is currently negotiating the AI Act, which includes provisions for high-risk applications, but military uses are largely exempted. The Google protest underscores the difficulty of reconciling commercial AI development with democratic accountability—a challenge that European policymakers will increasingly face.