Recently, NVIDIA made an interesting move in the inference market. It acquired Groq's chip business for $200 billion, bringing in the key team led by Jonathan Ross. The curious part is that Groq continues to operate as an independent company, so it is not a full acquisition.
What caught my attention was Jensen Huang's explanation of why they did it. It turns out the inference market is not monolithic. Previously, everything revolved around squeezing out more throughput, period. But things have changed: users are now willing to pay different prices depending on response speed. If faster token generation makes an engineer more productive, that engineer will pay for it.
That's where Groq comes in. Its LPU architecture is known for low, deterministic latency, the opposite end of the spectrum from NVIDIA's throughput-oriented GPUs. Together they complete a spectrum: on one side, maximum throughput; on the other, minimum latency. Two market segments, two different prices, same model.
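The two-segment idea can be sketched as a toy dispatcher that routes each request to whichever tier is cheapest while still meeting the caller's latency target. The backend names, latencies, and prices below are purely illustrative assumptions, not real Groq or NVIDIA figures:

```python
# Toy sketch: serving the same model at two price points.
# All numbers and tier names are hypothetical assumptions.
from dataclasses import dataclass


@dataclass
class Backend:
    name: str
    p99_latency_ms: float       # low-latency tier responds fast and deterministically
    price_per_1k_tokens: float  # latency commands a premium


# Hypothetical tiers: an LPU-style low-latency tier, a GPU-style throughput tier.
LOW_LATENCY = Backend("lpu_tier", p99_latency_ms=50, price_per_1k_tokens=0.40)
THROUGHPUT = Backend("gpu_tier", p99_latency_ms=400, price_per_1k_tokens=0.10)


def route(latency_slo_ms: float) -> Backend:
    """Pick the cheapest backend that still meets the caller's latency target."""
    candidates = [b for b in (LOW_LATENCY, THROUGHPUT)
                  if b.p99_latency_ms <= latency_slo_ms]
    return min(candidates, key=lambda b: b.price_per_1k_tokens)


# An interactive coding assistant needs fast tokens; a batch summarizer does not.
print(route(latency_slo_ms=100).name)   # tight SLO -> only the LPU tier qualifies
print(route(latency_slo_ms=1000).name)  # loose SLO -> cheaper GPU tier wins
```

The point of the sketch is that both tiers serve the same model; what differs is the price the buyer attaches to latency.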
At the March GTC conference, they launched the Groq 3 LPU, built on Samsung's 4 nm process. The numbers are impressive: 35 times more inference efficiency per megawatt compared to NVIDIA's Blackwell NVL72. It's the kind of differentiation that opens new markets instead of just competing in existing ones.
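Taken at face value, a per-megawatt multiplier compounds quickly at data-center scale. The arithmetic below just works through what "35x per megawatt" would mean; the baseline tokens-per-second figure and facility size are illustrative assumptions, only the 35x multiplier comes from the claim above:

```python
# Back-of-the-envelope arithmetic for a "35x inference efficiency per
# megawatt" claim. The baseline throughput and facility size are
# hypothetical; only the 35x multiplier is taken from the article.

baseline_tokens_per_sec_per_mw = 1_000_000  # assumed GPU-tier baseline
claimed_multiplier = 35

lpu_tokens_per_sec_per_mw = baseline_tokens_per_sec_per_mw * claimed_multiplier

# Hourly token output for an assumed 10 MW facility:
facility_mw = 10
gpu_hourly = baseline_tokens_per_sec_per_mw * facility_mw * 3600
lpu_hourly = lpu_tokens_per_sec_per_mw * facility_mw * 3600

print(f"GPU tier: {gpu_hourly:.2e} tokens/hour")
print(f"LPU tier: {lpu_hourly:.2e} tokens/hour")
```

Whatever the true baseline, the ratio is what matters commercially: at the same power budget, the claimed gap is the difference between two entire pricing tiers.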
The division of labor is clear: NVIDIA keeps dominating raw throughput, while Groq specializes in serving users who value response speed above all else. Two strategies, one more complete ecosystem.