Just caught something pretty significant from Jensen Huang on Nvidia's latest earnings call. The guy basically dropped a reality check about how much computing power AI actually needs, and it fundamentally changes how you should think about the company's runway.
So here's the setup - Nvidia's about to start shipping its Vera Rubin platform in the second half of 2026. This isn't just an incremental upgrade. We're talking about a GPU that trains AI models with 75% fewer chips than the current Blackwell generation, while cutting inference token costs by 90%. For context, tokens are the units AI companies bill for - every chunk of text a chatbot generates costs money to produce. Slash that cost by 90% and you're suddenly looking at massive margin expansion for the entire AI industry.
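To make the margin point concrete, here's a back-of-the-envelope sketch in Python. The per-token price and serving cost are made-up illustrative numbers - only the 90% cost reduction comes from the Vera Rubin claim above:

```python
# Hypothetical math on what a 90% inference cost cut does to margins.
# The price and cost inputs are illustrative assumptions, not figures
# from the earnings call; only the 90% reduction is from the article.

price_per_m_tokens = 10.00   # what an AI company charges per million tokens (assumed)
cost_per_m_tokens = 7.00     # current serving cost per million tokens (assumed)

margin_now = (price_per_m_tokens - cost_per_m_tokens) / price_per_m_tokens

# Apply the claimed 90% cut to inference token costs, prices held flat
cost_after = cost_per_m_tokens * (1 - 0.90)
margin_after = (price_per_m_tokens - cost_after) / price_per_m_tokens

print(f"gross margin before: {margin_now:.0%}")    # 30%
print(f"gross margin after:  {margin_after:.0%}")  # 93%
```

The exact numbers don't matter - the point is that when serving cost is the bulk of the expense, a 90% cut flips the unit economics for anyone selling tokens.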
But here's where Jensen Huang's recent comments get interesting. An analyst asked whether customers could actually sustain their massive capex spending on data centers. His response? The world historically spent around $400 billion annually on classical computing infrastructure. For AI workloads, he said we need roughly a thousand times more capacity. Let that sink in. He's previously estimated AI infrastructure spending could hit $4 trillion per year by 2030. That's not hyperbole - that's the scale of what's coming.
Looking at the actual numbers: Nvidia pulled in $215.9 billion in fiscal 2026 revenue, up 65% year-over-year. Data center revenue alone was $193.7 billion. Management's guiding for $78 billion in Q1 FY2027 - a 77% jump. Most people see this and think "okay, that's growth." But when you combine it with what Jensen Huang's saying about infrastructure needs, you realize we're still in the early innings.
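If you want to sanity-check those growth figures, the implied year-ago bases fall straight out of the percentages - a quick sketch using only the numbers above:

```python
# Derive the implied year-ago figures from the growth rates quoted above.
# Inputs are the article's numbers; everything else is simple division.

fy2026_revenue = 215.9      # $B, fiscal 2026 revenue
fy2026_growth = 0.65        # 65% year-over-year
q1_fy2027_guide = 78.0      # $B, management's Q1 FY2027 guide
q1_growth = 0.77            # 77% jump

implied_fy2025 = fy2026_revenue / (1 + fy2026_growth)   # ~$130.8B
implied_q1_fy2026 = q1_fy2027_guide / (1 + q1_growth)   # ~$44.1B

print(f"implied FY2025 revenue: ${implied_fy2025:.1f}B")
print(f"implied Q1 FY2026 base: ${implied_q1_fy2026:.1f}B")
```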
On valuation, the stock's trading at 36.1x trailing earnings, which sounds expensive until you realize that's 41% below its 10-year average P/E of 61.6. Forward P/E is sitting at 21.5 based on Wall Street's consensus of $8.23 EPS for fiscal 2027. The S&P 500 trades at 24.7x. So if Nvidia just matched historical multiples, you'd be looking at substantial upside from here.
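Here's that multiple math in one place. The inputs are the figures above; the "upside at avg P/E" line is pure multiple reversion on flat earnings - an assumption, not a forecast:

```python
# Rough sketch of the valuation math in the paragraph above.
# All inputs come from the article; the rest is arithmetic.

trailing_pe = 36.1         # current trailing P/E
avg_pe_10yr = 61.6         # 10-year average P/E
forward_pe = 21.5          # forward P/E
forward_eps = 8.23         # Wall Street consensus EPS for fiscal 2027

implied_price = forward_pe * forward_eps              # ~$177 implied current price
discount_vs_history = 1 - trailing_pe / avg_pe_10yr   # ~41% below the 10-yr average

# If the stock simply re-rated back to its historical average multiple
# on the same earnings, the implied move would be:
upside = avg_pe_10yr / trailing_pe - 1                # ~71%

print(f"implied price:       ${implied_price:.2f}")
print(f"discount to history: {discount_vs_history:.0%}")
print(f"upside at avg P/E:   {upside:.0%}")
```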
The real story isn't about quarterly beats - it's about Jensen Huang and the team recognizing that AI infrastructure spending is about to dwarf everything that came before it. Vera Rubin shipping this year is just the beginning of that transition.