I've noticed something interesting in NVIDIA's current strategy. Last week, Jensen Huang explained in detail why NVIDIA spent $20 billion to acquire Groq, and honestly, it's a brilliant strategic decision that shows how the inference market is transforming.
So here's the context: for a long time, everyone optimized for a single metric - throughput. But Groq understood something that others missed: software engineers are now willing to pay more for faster responses. That's an entirely new market segment. As Huang put it, if we can offer tokens with ultra-low latency and make developers more productive, they will pay for it. This market is only beginning to emerge.
And that's where Groq comes into play. The acquisition fills a major gap in NVIDIA's inference arsenal. While NVIDIA dominates the high-throughput segment with its traditional solutions, Groq brings something completely different: a proven LPU architecture known for exceptionally low, deterministic latency. At GTC in March, NVIDIA showcased the Groq 3 LPU, fabricated on Samsung's 4 nm process. The numbers are impressive - 35 times more inference per megawatt than Blackwell NVL72 on models with 1 trillion parameters.
It's essentially an extension of the market's Pareto frontier. Instead of forcing a choice between high throughput and low latency, NVIDIA is now serving two distinct segments. Groq continues to operate as an independent entity, with Jonathan Ross and his team joining NVIDIA. Tokens can be priced differently depending on response time - lower throughput, but a unit price that more than compensates. It's pure business genius, and it shows how the AI market is becoming more sophisticated. Both approaches will coexist, and customers will choose based on their actual needs.
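The pricing logic - fewer tokens per second, but a higher unit price that more than compensates - can be sketched with a quick back-of-the-envelope calculation. Every number below is invented purely for illustration; none of these figures come from NVIDIA, Groq, or the post above.

```python
# Hypothetical illustration of the throughput-vs-latency pricing tradeoff.
# All throughput and price figures are made up for the example.

def revenue_per_second(tokens_per_second: float, price_per_million_tokens: float) -> float:
    """Revenue rate for a deployment serving tokens at a given price."""
    return tokens_per_second * price_per_million_tokens / 1_000_000

# High-throughput segment: many tokens at a commodity price.
high_throughput = revenue_per_second(tokens_per_second=50_000, price_per_million_tokens=2.0)

# Low-latency segment: far fewer tokens, but at a premium price.
low_latency = revenue_per_second(tokens_per_second=10_000, price_per_million_tokens=15.0)

print(f"high-throughput: ${high_throughput:.4f}/s")  # $0.1000/s
print(f"low-latency:     ${low_latency:.4f}/s")      # $0.1500/s
```

With these (invented) numbers, the low-latency deployment earns 50% more per second despite serving one fifth of the tokens, which is the sense in which the unit price "more than compensates" for the lost throughput.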