Been diving into something that's been quietly reshaping how modern warfare actually works, and honestly it's pretty wild when you piece it all together.
So there's this operation called Epic Fury back in February 2026—Israel and the US basically ran what amounts to an AI stress test in a real combat zone against Iran. But here's what most people miss: this wasn't just about firepower. It was about compressing the entire kill chain—from sensor data to decision-making to actual strikes—into minutes or even seconds. Whoever cracks that compression wins the next round of geopolitical leverage.
What caught my attention is how openly the major tech companies have shifted their positions. OpenAI went from this whole ethical stance about staying away from military applications to suddenly landing what's probably the most sensitive defense contract of our time. They announced it around late February—deploying GPT models on classified networks for intelligence analysis, translation, combat simulations. The company says they're doing this within "red lines," but let's be real: those red lines just got a lot more flexible when you're talking about hundreds of millions in defense contracts.
Then there's Anthropic, which took the opposite path. They refused to budge on their principles, wouldn't agree to the Pentagon's demands on autonomous weapons and mass surveillance. Result? They got labeled a "supply chain risk"—a designation previously reserved for companies like Huawei. That's a chilling signal to the entire industry: stick to your ethics and watch your access to the defense budget disappear overnight.
But here's the thing nobody talks about enough: the real power in this equation isn't held by the model companies. It's held by Microsoft and Google. Without their cloud infrastructure, all those fancy AI models are just PowerPoint slides. Microsoft Azure basically became the operational backbone—the Israeli military scaled up its machine learning operations by something like 64 times in a few months. Google's Project Nimbus has been providing cloud infrastructure worth over a billion dollars. These companies are absorbing the actual cash flow while the model providers take the reputational hit. Smart, if you think about it cynically.
What really disturbs me are the Israeli AI systems like Lavender. This thing analyzed behavioral patterns for basically every adult male in Gaza, assigned each a "suspected militant score," and identified tens of thousands of targets. Then Gospel automated the building targeting, and Where's Daddy optimized when to strike to maximize casualties. Human review? A few dozen seconds per target. This is what an algorithmic killing factory actually looks like when you remove friction from the decision-making process.
The scary part is how portable this logic is. The techniques they developed in Gaza could be applied anywhere—Tehran, Taipei, wherever. It's not about the specific geography; it's about the data pipeline and the cloud infrastructure that processes it.
From a market perspective, we're watching the emergence of what you could call an AI-Cloud-Defense complex, and it's reshaping how investors should think about tech stocks. This isn't just about OpenAI or Microsoft as consumer-facing companies anymore. It's about who controls the infrastructure for the next generation of conflicts. The companies willing to compromise on ethics are getting rewarded with stable, counter-cyclical cash flow that insulates them from ordinary business cycles.
The real question people should be asking: before we outsource more kill chains to a handful of large-model and cloud companies, do we still have time to figure out who's actually responsible when algorithmic recommendations become bombing coordinates? Because if you say nothing now, you're essentially betting that this complexity stays manageable. History suggests otherwise.