Just caught something interesting about where Nvidia's headed, and Jensen Huang basically confirmed what a lot of people have been thinking about AI infrastructure spending.
So here's the thing: Nvidia's about to start shipping their Vera Rubin platform in the second half of this year, and it's a genuine performance jump. We're talking 75% fewer GPUs needed for training compared to Blackwell, plus inference token costs slashed by 90%. That's not incremental progress; that's a meaningful shift in economics for every AI company running on their chips.
During their earnings call back in February, Jensen Huang dropped a comment that really puts things in perspective. When someone asked if their customers could actually sustain the massive capex they're throwing at data centers, Huang pointed out that the world used to spend around $400 billion annually on classical computing infrastructure. But AI workloads? He's saying we need roughly a thousand times more capacity. That's the kind of scale that makes you rethink everything.
Last year Jensen Huang estimated AI data center infrastructure spending could hit $4 trillion per year by 2030. That sounded wild when he said it, but honestly, if the computing requirements really are that massive, it starts to make sense, especially once inference costs drop and more companies can actually afford to deploy these systems at scale.
Looking at the numbers, Nvidia pulled in $215.9 billion in revenue for fiscal 2026, up 65% year-over-year. Data center alone was $193.7 billion, up 68%. They're guiding for $78 billion in Q1 of fiscal 2027, which would be another 77% jump. The growth is genuinely accelerating, not slowing down.
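If you want to sanity-check those growth rates, the prior-year bases they imply are easy to back out. Here's a quick back-of-envelope sketch in Python; the "implied" figures below are derived from the percentages quoted above, not pulled from Nvidia's filings.

```python
# Back-of-envelope check of the growth rates quoted above.
# The "implied" prior-year figures are derived from the stated percentages,
# not taken from Nvidia's actual filings.
fy2026_revenue = 215.9        # $B, fiscal 2026 total revenue
fy2026_data_center = 193.7    # $B, fiscal 2026 data center revenue
q1_fy2027_guide = 78.0        # $B, guided Q1 fiscal 2027 revenue

implied_fy2025_revenue = fy2026_revenue / 1.65      # ~$130.8B base for 65% growth
implied_fy2025_dc = fy2026_data_center / 1.68       # ~$115.3B base for 68% growth
implied_q1_fy2026 = q1_fy2027_guide / 1.77          # ~$44.1B base for a 77% jump

print(f"Implied FY2025 revenue:     ${implied_fy2025_revenue:.1f}B")
print(f"Implied FY2025 data center: ${implied_fy2025_dc:.1f}B")
print(f"Implied Q1 FY2026 revenue:  ${implied_q1_fy2026:.1f}B")
```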
What's wild is the valuation. The stock's trading at a P/E of 36.1 right now, which is actually 41% cheaper than its 10-year average of 61.6. Forward P/E is sitting at 21.5 on Wall Street's fiscal 2027 earnings estimate of $8.23 per share, cheaper than the S&P 500's 24.7. For a company that Jensen Huang and management are basically saying is in the early innings of a trillion-dollar market expansion, that feels pretty reasonable to me.
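The valuation math itself checks out if you run it with the figures above. A minimal sketch; note the implied share price is just the forward multiple times the EPS estimate, my own derivation rather than a quoted number.

```python
# Checking the valuation claims using only the figures quoted above.
current_pe = 36.1
ten_year_avg_pe = 61.6
forward_pe = 21.5
fy2027_eps = 8.23          # Wall Street estimate per the text
sp500_forward_pe = 24.7

discount = 1 - current_pe / ten_year_avg_pe   # ~0.41 -> the "41% cheaper" claim
implied_price = forward_pe * fy2027_eps       # ~$177/share, implied, not quoted

print(f"Discount to 10-year average P/E: {discount:.0%}")
print(f"Implied share price at 21.5x:    ${implied_price:.0f}")
print(f"Below S&P 500 forward P/E?       {forward_pe < sp500_forward_pe}")
```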
The way I see it, Nvidia's competing with itself more than anyone else right now. Demand still outpaces supply, and with Vera Rubin coming online, that gap probably widens before it narrows. Interesting spot to be watching.