600 Google employees jointly protest: refuse Gemini access to Pentagon's classified military network

Over 600 Google employees (including more than 20 vice presidents and senior executives) jointly signed a letter to CEO Sundar Pichai, demanding that the company refuse to expand Gemini into the Pentagon’s classified networks.
Employees pointed out that in classified environments, external vendors cannot monitor or control AI systems, making the company's proposed safeguards technically infeasible.
(Background: Y Combinator Startup Guide Interpretation: What are the future development trends of AI Agents?)
(Additional context: UC research on “AI Brain Fog” phenomenon: 14% of office workers are driven crazy by Agents and automation, with 40% considering resignation)

Table of Contents


  • From Unclassified to Classified
  • Structural Dilemmas in Classified Environments
  • The Silent Disappearance of Google AI Principles, and a Historical Comparison with Maven

Over 600 Google employees jointly signed a letter to CEO Sundar Pichai, including senior researchers from DeepMind and more than 20 vice presidents and senior executives, urging the company not to authorize Gemini’s entry into the Pentagon’s classified military networks.

From Unclassified to Classified

Let’s first clarify the collaboration between the two parties. At the end of 2022, Google, AWS, Microsoft, and Oracle jointly won the U.S. Department of Defense’s core cloud procurement contract, the Joint Warfighting Cloud Capability (JWCC), with a combined ceiling of $9 billion.

In December 2025, the Gemini-based GenAI.mil platform officially launched for unclassified environments.

By March 2026, the Gemini AI agent had been deployed to all 3 million personnel of the Department of Defense.

All of these deployments remain in unclassified settings.

But the new agreement now under negotiation would extend Gemini into classified environments: air-gapped networks physically isolated from the outside world and reserved for classified military operations. In plain terms, Gemini would enter operational command centers.

The disagreement at the negotiating table is clear. Google is trying to write red lines into the contract: Gemini must not be used to track U.S. citizens or to make strike decisions without human involvement. The Pentagon’s position is “all lawful uses”: no explicit prohibitions, plus an explicit stipulation that external vendors shall have no control whatsoever over AI systems running on its networks.

These two conditions conflict directly. In the joint letter, Google employees argue that the safeguards the company has proposed “are technically impossible to implement.”

Structural Dilemmas in Classified Environments

The signatories of the joint letter highlight a fundamental problem: “Classified work, by definition, is opaque.” Once Gemini enters a classified network, even Google cannot see what it is doing.

In unclassified environments, Google can audit API calls, monitor model outputs, set guardrails, and intervene when problems surface. In a classified environment, none of this is possible.

Employees listed concrete concerns in the letter: profiling (using AI to build behavioral and identity models of targeted individuals) and the targeting of innocent civilians. These are not hypothetical scenarios but existing modes of AI-assisted military operations.

The dilemma for Google’s management is that they cannot technically guarantee to employees that Gemini will not be used for certain purposes, because the company simply cannot see or verify what happens inside those networks.

The Silent Disappearance of Google AI Principles, and a Historical Comparison with Maven

In February 2025, Google quietly amended its AI Principles, removing the explicit prohibition on “developing weapons or surveillance AI.” DeepMind CEO Demis Hassabis explained that global AI leadership competition was intensifying, and human rights organizations like Human Rights Watch and Amnesty International immediately condemned the change.

In 2018, Google collaborated with the U.S. Department of Defense on Project Maven, which used Google’s AI and machine-learning technology to automatically analyze aerial imagery captured by drones and assist the military with target identification. The project sparked fierce internal ethical protests and became a pivotal moment in the history of tech-military cooperation: 4,000 employees signed a petition, at least 12 resigned, and Google was ultimately forced to decline renewing the contract, which Palantir then took over.

The Maven protest succeeded partly because, in 2018, military AI applications were still a fringe issue and the brand cost to Google exceeded the contract’s value. Eight years later, AI has become a core component of defense infrastructure, with Google, AWS, and Microsoft competing fiercely for major contracts.

Market position, political pressure, and competitive landscape all point in the same direction: a letter alone seems insufficient to counter the market interests behind these massive contracts.
