Have you noticed NVIDIA's very interesting strategy? They acquired Groq's inference chip business for $200 billion, and once you look at the reasoning, it makes a lot more sense why they did it.
What caught my attention was Jensen Huang's explanation of the logic behind the acquisition. In short, the inference market is segmenting. Until now, everyone was focused on one thing: maximizing throughput. But the commercial value of a token has shifted, and different users are now willing to pay different prices depending on response speed.
It's like this: if I can give engineers faster responses, letting them work more efficiently, they'll be willing to pay more for that. And this demand for low latency is relatively new in the market.
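To make that concrete, here's a minimal sketch of latency-tiered token pricing. Every tier name, latency bound, and dollar figure below is hypothetical, invented purely to illustrate the idea that the same model can be sold at different prices depending on the response-time guarantee:

```python
# Hypothetical latency-tiered pricing for the SAME underlying model.
# Tiers, latency bounds, and prices are illustrative, not market data.

PRICE_TIERS = [
    # (max time-to-first-token in ms, USD per 1M output tokens)
    (100,  30.0),   # interactive tier: engineers in a tight edit-run loop
    (500,  10.0),   # standard tier: chat assistants, moderate urgency
    (5000,  2.0),   # batch tier: offline jobs, latency barely matters
]

def price_per_million_tokens(latency_ms: float) -> float:
    """Return the price of the tightest tier whose latency bound is met."""
    for max_latency, price in PRICE_TIERS:
        if latency_ms <= max_latency:
            return price
    return PRICE_TIERS[-1][1]  # slower than every bound: batch price

if __name__ == "__main__":
    for lat in (80, 300, 4000):
        print(f"{lat:>5} ms -> ${price_per_million_tokens(lat):.2f} per 1M tokens")
```

The point of the sketch: the faster the guaranteed response, the more each token is worth, even though the model producing it is identical.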
Then comes Groq. Their LPU architecture is known precisely for its deterministic low latency, which complements NVIDIA's high-throughput GPU approach perfectly. When they launched the Groq 3 LPU on a 4nm process, they showed that its inference capacity per megawatt on trillion-parameter models is 35 times that of the Blackwell NVL72. That's no small feat.
In other words, NVIDIA filled an important gap in its product line. It now covers both the high-throughput segment and the low-latency, high-value-per-token segment. Pareto expansion, as some call it: the same model sold at different prices depending on response time. Throughput is lower, but the unit price makes up for it.
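Here's a back-of-the-envelope sketch of that tradeoff. All numbers are assumptions picked for illustration (the baseline throughput, the 10x throughput penalty for small-batch serving, and both prices are made up; none of them come from NVIDIA or Groq):

```python
# Back-of-the-envelope: revenue per megawatt for two ways of serving
# the SAME model. Every number is hypothetical, chosen only to show
# how lower throughput plus a higher unit price can still come out ahead.

def revenue_per_mw_hour(tokens_per_sec_per_mw: float,
                        usd_per_million_tokens: float) -> float:
    """USD earned per megawatt-hour of serving capacity."""
    tokens_per_hour = tokens_per_sec_per_mw * 3600
    return tokens_per_hour / 1e6 * usd_per_million_tokens

# High-throughput deployment: big batches, cheap tokens.
batch = revenue_per_mw_hour(tokens_per_sec_per_mw=2_000_000,
                            usd_per_million_tokens=2.0)

# Low-latency deployment: small batches cut throughput 10x,
# but interactive users pay 15x more per token.
interactive = revenue_per_mw_hour(tokens_per_sec_per_mw=200_000,
                                  usd_per_million_tokens=30.0)

print(f"batch:       ${batch:,.0f} per MW-hour")        # $14,400
print(f"interactive: ${interactive:,.0f} per MW-hour")  # $21,600
```

With these made-up numbers, the low-latency deployment earns 1.5x the revenue from the same megawatt despite producing a tenth of the tokens, which is exactly why segmenting by response time expands the Pareto frontier instead of cannibalizing it.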
This is the strategy: it's not competition, it's complementarity. And it makes a lot of sense considering how the AI market is evolving.