Hundreds at Google push leadership to drop Pentagon AI tie-up
More than 600 Google (GOOGL, GOOG) employees told Sundar Pichai on Monday to keep the Pentagon away from Google’s AI for classified work.
The letter came from workers inside DeepMind and Google Cloud, who pointed to a report from The Information that said Google and the US Department of Defense were in talks over using Gemini inside classified systems.
Google employees outline numerous reasons to block Gemini from secret Pentagon work
The letter opened with a direct warning to Pichai: “We are Google employees who are deeply concerned about ongoing negotiations between Google and the US Department of Defense. As people working on AI, we know that these systems can centralize power and that they do make mistakes.”
The employees said they believe their closeness to the technology gives them “a responsibility to highlight and prevent its most unethical and dangerous uses.”
“We ask you to refuse to make our AI systems available for classified workloads. Otherwise, such uses may occur without our knowledge or the power to stop them,” the employees wrote. The letter then explained that they want AI to help people, not support harm. They named lethal autonomous weapons and mass surveillance as their core fears, and added that the risks go beyond those two areas, because classified work can hide what is really happening.
Their argument was that if Google accepts secret military workloads, workers may have no way to check the use, question it, or stop it, and that the only real guarantee is to reject classified work before any deal is done.
They also warned that the wrong choice could damage Google’s name, business, and place in the world. The employees said their own safety and critical infrastructure face active threats, while lives and civil rights are already at risk from bad use of technology built by people like them.
Trump’s Pentagon is forcing its way into massive AI access for US military
Pentagon leaders have said the military must be free to use commercial AI for “all lawful uses.” Officials say that phrase gives the government room to use the technology in different cases while staying within US law and military rules.
AI workers do not see that as enough protection. Their concern has grown because of Trump’s own language and actions. Earlier this month, President Donald Trump threatened to bomb “every” bridge and power plant in Iran. Experts told The Post that such an attack would break international law. His administration’s strikes on boats it claims were carrying drugs have also been challenged by international law experts.
The Google letter landed while other AI firms are already tangled in Pentagon fights. Anthropic, the private company behind Claude, had its technology placed inside US military systems last year. Those tools helped sort data and identify possible targets.
Then the Pentagon cut Anthropic off from all Defense Department work in February. The company had tried to add contract terms saying its AI could not be used for mass surveillance or lethal autonomous weapons.
Anthropic and the government are now in court over whether that cutoff was legal. That case has pulled more attention toward Google and OpenAI, because both work with the US military.
OpenAI is privately held, so it has no stock ticker. It signed a deal in February to provide AI for classified workloads, soon after Anthropic was removed. Sam Altman, OpenAI’s chief executive, has said he is confident the government contract blocks use for US mass surveillance and lethal autonomous weapons.
Google has been here before. In 2018, workers protested a Pentagon project that used Google AI to identify objects in drone footage. Hundreds signed a petition against that work, and Google later chose not to renew the deal.
After that fight, Google created a pledge saying its AI technology would not be used for weapons or surveillance. But the company has spent recent years looking for more military contracts. Last year, Google dropped those limits. In December, it signed a deal that lets the Defense Department use Gemini.