Just caught something pretty interesting from Nvidia's latest earnings call. Jensen Huang basically just dropped a bombshell about the company's next-gen Vera Rubin platform, and it's shaping up to be a game-changer for the entire AI infrastructure space.
So here's what's happening. Nvidia's shipping samples now, but full-scale production kicks off in the second half of 2026. The Vera Rubin architecture is legitimately impressive: it can train AI models using 75% fewer GPUs than the current Blackwell generation, and it cuts inference token costs by 90%. For context, that's massive because every token an AI model generates costs money. Lower costs mean companies will push usage harder, which drives more revenue for AI platforms. No wonder Nvidia's CFO basically said every cloud provider is going to deploy this thing.
But here's the part that caught my attention. During the earnings call, Huang made a wild claim about computing capacity. He said the world has historically spent around $400 billion annually on classical computing infrastructure, then added that AI workloads require roughly a thousand times more capacity. Last year he estimated AI data center spending could hit $4 trillion annually by 2030. That's not just ambitious; it suggests we're still in the early innings of this infrastructure build-out.
The numbers back this up. Nvidia just reported $215.9 billion in revenue for fiscal 2026, up 65% year-over-year, with data center sales accounting for $193.7 billion. Management expects first-quarter revenue to hit $78 billion, which would be 77% higher than last year. When you look at a CEO like Jensen Huang steering this kind of growth trajectory, you start to understand why his net worth and the company's valuation keep climbing.
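As a rough sanity check, the growth figures above are internally consistent. A minimal sketch (assuming the quoted 65% and 77% growth rates apply to the quoted totals, which is a simplification of how fiscal-year comparisons actually work):

```python
# Back-of-the-envelope check on the quoted growth figures.
# All dollar amounts in billions; the "implied" numbers are derived,
# not reported figures.

fy2026_revenue = 215.9   # total revenue, as quoted
data_center = 193.7      # data center sales, as quoted
q1_guidance = 78.0       # next-quarter revenue guidance, as quoted
q1_growth = 0.77         # 77% year-over-year, as quoted

prior_year_revenue = fy2026_revenue / 1.65        # implied prior-year total
data_center_share = data_center / fy2026_revenue  # data center as % of revenue
prior_q1 = q1_guidance / (1 + q1_growth)          # implied year-ago quarter

print(f"Implied prior-year revenue: ${prior_year_revenue:.1f}B")
print(f"Data center share of revenue: {data_center_share:.0%}")
print(f"Implied year-ago Q1 revenue: ${prior_q1:.1f}B")
```

Data center works out to roughly 90% of total revenue, which is why the whole thesis really hinges on AI infrastructure demand.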
What's wild is the valuation. Nvidia's trading at a P/E ratio of 36.1 right now, which is actually 41% cheaper than its 10-year average of 61.6. Wall Street expects fiscal 2027 earnings to reach $8.23 per share, which puts the forward P/E at just 21.5. The S&P 500 trades at 24.7, so if those earnings estimates hit, Nvidia could actually be cheaper than the broader market. Even if the stock just trades to its historical average P/E, we're talking potential upside of 186% over the next 12 months.
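The upside math above can be reproduced directly from the quoted numbers. A minimal sketch, assuming the implied share price is simply forward EPS times forward P/E (a deliberate simplification that ignores dilution, buybacks, and multiple compression along the way):

```python
# Reproduce the valuation arithmetic from the quoted figures.

forward_eps = 8.23    # fiscal 2027 EPS estimate, as quoted
forward_pe = 21.5     # forward P/E, as quoted
current_pe = 36.1     # trailing P/E, as quoted
hist_avg_pe = 61.6    # 10-year average P/E, as quoted

implied_price = forward_eps * forward_pe       # price consistent with forward P/E
price_at_hist_avg = forward_eps * hist_avg_pe  # price if multiple reverts to average

upside = price_at_hist_avg / implied_price - 1     # roughly the 186% quoted
discount = 1 - current_pe / hist_avg_pe            # roughly the 41% quoted

print(f"Implied current price: ${implied_price:.0f}")
print(f"Price at historical average P/E: ${price_at_hist_avg:.0f}")
print(f"Potential upside: {upside:.0%}")
print(f"Discount vs 10-year average P/E: {discount:.0%}")
```

Note that the whole upside case rests on two assumptions stacking: the $8.23 earnings estimate actually landing, and the market paying its historical multiple for those earnings.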
Obviously I'm not making investment recommendations here, but the opportunity Nvidia's sitting on seems legitimately enormous. The combination of next-gen hardware, explosive demand, and reasonable valuation is pretty rare to see all at once.