So I've been digging into Nvidia's latest earnings call from late February, and there's something pretty interesting emerging here that most people might be glossing over.
First, the Vera Rubin platform. This isn't just another incremental GPU refresh. We're talking about a system that cuts GPU requirements by 75% compared to their current Blackwell chips, while slashing inference token costs by 90%. For context, that's the kind of efficiency jump that actually changes economics for AI companies. When you can reduce what it costs to generate outputs that dramatically, it unlocks a whole new layer of demand.
Nvidia's shipping samples now, with mass production ramping in the second half of this year. Their CFO basically said every major cloud provider will deploy this thing. That's the kind of confidence you don't hear often.
But here's where it gets really interesting. During the earnings call, Jensen Huang made this observation about long-term capacity needs. He pointed out that historically, the world spent around $400 billion annually on classical computing infrastructure. His take? AI workloads require roughly a thousand times more capacity than that. He's previously estimated AI infrastructure spending could hit $4 trillion per year by 2030.
Now, that's an ambitious number. But if the math checks out on computing requirements, and if those falling inference costs drive adoption like we'd expect, it might actually be conservative.
On valuation, here's what caught my attention. Nvidia's trading at a trailing P/E of 36.1 right now. Sounds reasonable until you compare it to the company's 10-year average of 61.6, which makes today's multiple a roughly 41% discount to its own history. Wall Street's consensus for fiscal 2027 earnings is $8.23 per share, which puts the forward P/E at 21.5.
For perspective, the S&P 500 is trading around 24.7 on a trailing basis. So if Nvidia just treads water, it could actually become cheaper than the broader market on a relative basis.
The thing is, given the scale of the opportunity Huang's describing, I don't think the stock sits at these levels for long. If that fiscal 2027 estimate hits and the multiple reverts anywhere close to historical norms, you're looking at significant upside from here.
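If you want to sanity-check that setup, the arithmetic is simple enough to run yourself. This is a back-of-envelope sketch using only the figures quoted above (trailing P/E of 36.1, 10-year average of 61.6, $8.23 fiscal 2027 EPS, 21.5 forward P/E); the implied prices it derives are illustrative, not market data.

```python
# Back-of-envelope valuation math using the figures quoted in this post.
# Inputs are the article's numbers, not live market data.

trailing_pe = 36.1   # current trailing P/E
avg_pe_10yr = 61.6   # 10-year average P/E
fy2027_eps = 8.23    # Wall Street consensus EPS for fiscal 2027
forward_pe = 21.5    # forward multiple quoted above

# Discount of today's multiple to the 10-year average
discount = 1 - trailing_pe / avg_pe_10yr
print(f"Discount to 10-yr average P/E: {discount:.0%}")  # ~41%

# Share price implied by the forward multiple (derived, hypothetical)
implied_price = fy2027_eps * forward_pe
print(f"Price implied by forward P/E: ${implied_price:.0f}")  # ~$177

# Scenario: fiscal 2027 EPS lands and the stock merely holds today's
# trailing multiple (still well below the 10-year average)
reverted_price = fy2027_eps * trailing_pe
upside = reverted_price / implied_price - 1
print(f"Price at today's trailing multiple: ${reverted_price:.0f} "
      f"({upside:.0%} upside)")
```

Note that the scenario is deliberately conservative: it re-rates only to today's trailing multiple, not the 10-year average, yet still implies meaningful upside if the earnings estimate holds.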
Obviously this isn't investment advice, but the setup looks pretty compelling for patient capital. The infrastructure buildout for AI is still in early innings, and Nvidia's essentially competing with itself at this point. Worth keeping on your radar.