I just saw some very important news about Nvidia's acquisition. The company bought Groq's inference chip division for $20 billion last December, and the move is starting to make more sense now.
Jensen Huang, Nvidia's CEO, explained the real reasoning behind the deal in a recent interview. It turns out this isn't just about a routine throughput boost: Nvidia is targeting an entirely new market, low-latency, high-value inference. That market has only recently begun to take shape, as users start paying different prices depending on response speed.
The idea is simple but powerful: if you can serve tokens to developers at lower latency, so they can work more efficiently, they'll pay a higher price. Huang described it as expanding the market's boundaries: adding an entirely new segment rather than competing only on raw throughput.
The first product after the acquisition appeared in March: the Groq 3 LPU, fabricated on Samsung's 4-nanometer process. The numbers are impressive: inference throughput per megawatt on trillion-parameter models reaches 35 times that of Blackwell NVL72. Groq's architecture is known for its low, predictable latency, which is exactly what Nvidia's product lineup was missing.
It's an incredibly smart play: the same model, priced differently depending on response time. Even if raw throughput is lower, the higher price makes up for it. Groq filled the gap in Nvidia's strategy, and the market has clearly begun to split into distinct segments.