NVIDIA(NVDA.US) Earnings Are Coming: Can the "AI Computing Power Bull Market Narrative" Break Through the "AI Bubble"?
With NVIDIA (NVDA.US), the “AI chip king” often called “the most important stock on Earth,” set to report quarterly earnings after the U.S. market close on Wednesday (Thursday morning Beijing time), a “stress test” of AI computing power investment is about to follow. Global investors focused on AI infrastructure and the broader AI boom are looking for evidence that the profits of the world’s most valuable chip giant remain closely tied to the massive AI capital expenditure budgets of the four U.S. hyperscalers, which total between $650 billion and $700 billion, and that the two will keep growing strongly in tandem.
Meanwhile, hyperscalers’ recent announcements of plans to deploy more cost-effective, self-designed AI ASIC chips point to a potential risk to NVIDIA’s long-standing dominance in AI chips, the core of global AI infrastructure.
After driving the U.S. stock market’s super bull run of the past three years, NVIDIA’s stock, heavily weighted in the Nasdaq 100 and S&P 500 indices, has risen only about 2% so far in 2026. The sluggishness stems mainly from a wave of “AI doomsday narratives” triggered by Anthropic’s AI agent products, which have hit software stocks and richly valued tech giants hard, as well as from large cloud providers accelerating in-house development of more cost-effective AI ASICs (such as TPUs) and pursuing multi-vendor strategies, alongside intensifying competition from AMD and others. The chart below compares NVIDIA’s 2026 stock performance with the MAGS ETF (which tracks the seven largest U.S. tech giants) and the S&P 500.
Alongside strong competitors like AMD (AMD.US), which plans to release a new flagship AI server cluster later this year, Alphabet’s Google has emerged as a formidable rival to NVIDIA in AI infrastructure—specifically in AI chips—through a deal to provide Anthropic, the developer of the Claude chatbot, with its extensive self-developed TPU AI compute clusters (belonging to the AI ASIC technology route). Media reports also indicate that Google is in talks with Meta (META.US), Facebook’s parent company, to supply TPU-based AI infrastructure to one of NVIDIA’s largest clients.
Furthermore, NVIDIA’s earnings are highly “event-driven,” with options markets implying about ±5% expected stock price volatility after earnings. With a market cap of approximately $4.7 trillion, this corresponds to a single-event price swing of about $226 billion. Considering its roughly 7.8% weight in the S&P 500, this alone could trigger significant market fluctuations.
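The dollar figures above follow from simple arithmetic on the quoted inputs. A minimal sketch (market cap, implied move, and index weight are taken from the article; note that a ±5% move on a $4.7 trillion cap works out to roughly $235 billion, so the article’s $226 billion figure likely reflects a slightly lower cap snapshot):

```python
# Back-of-the-envelope sketch of the figures quoted above.
# All inputs are taken from the article text.
market_cap = 4.7e12      # NVIDIA market cap, ~$4.7 trillion
implied_move = 0.05      # options-implied +/-5% post-earnings move
sp500_weight = 0.078     # NVIDIA's ~7.8% weight in the S&P 500

# Dollar value at stake in a single +/-5% move
swing = market_cap * implied_move          # ~$235B at a $4.7T cap

# Mechanical index impact if NVIDIA alone moves 5%
index_move = implied_move * sp500_weight   # ~0.39% swing in the S&P 500

print(f"single-event swing: ${swing / 1e9:.0f}B")
print(f"implied S&P 500 move: {index_move:.2%}")
```

The second number is why a single earnings print can move the whole index: a 5% move in one stock with a 7.8% weight mechanically shifts the index by about 0.4% before any sector spillover.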
After the sharp run-up that began with the super bull market of early 2023, the seven largest U.S. tech giants, including Google, Microsoft, Amazon, and NVIDIA, have seen continued stock volatility this year, as investors question whether the massive ongoing investments in AI infrastructure (expected to exceed $700 billion this year, a 60% increase) will generate returns sufficient to justify their high valuations.
The so-called “Magnificent Seven” (Mag 7)—Apple, Microsoft, Google, Tesla, NVIDIA, Amazon, and Meta Platforms—hold a significant weight (about 35%-40%) in the S&P 500 and Nasdaq 100 indices. They are the main drivers behind the indices’ record highs and are viewed by top Wall Street institutions as the most capable of delivering substantial returns amid the largest technological upheaval since the internet era.
NVIDIA Aims to Seize the AI Inference Wave
To defend its near-monopoly position in AI infrastructure and capitalize on the AI inference boom, NVIDIA announced a $20 billion deal late last year to license chip technology from AI startup Groq. Analysts believe this move will strengthen NVIDIA’s leadership in the rapidly growing inference market—referring to the real-time, high-efficiency response of large AI models after training, and the rapid execution of complex AI workflows. Last week, NVIDIA also agreed to sell millions of AI chips to Meta, though the deal’s financial details were not disclosed.
As the biggest winner of the AI boom, NVIDIA has itself stoked concerns about the sustainability of AI compute infrastructure spending, especially through its potential $100 billion investment in OpenAI, one of its largest clients. Media reports suggest the company plans to replace that $100 billion commitment with a smaller $30 billion investment.
NVIDIA’s GPU dominance was built on AI training, which demands versatile compute clusters and rapid iteration of the entire system; inference, by contrast, emphasizes cost per token, latency, and energy efficiency once cutting-edge models are deployed. Google’s Ironwood TPU, for example, is positioned as a “generation born for AI inference,” stressing performance, energy efficiency, and scalability.
The deal with Groq involves non-exclusive licensing of inference AI chip technology, along with the integration of Groq’s founder and CEO Jonathan Ross and key engineering teams into NVIDIA. Some semiconductor analysts highlight Groq’s focus on inference-specific chips using on-chip SRAM to reduce data movement bottlenecks—directly targeting the cost and latency issues during inference.
The $20 billion agreement underscores NVIDIA’s strategy for defending its roughly 80% share of the AI chip market amid intensifying competition from Google’s TPU clusters and other players: securing full-stack AI dominance through multi-architecture AI compute, a strengthened CUDA ecosystem, and the recruitment of more AI chip design talent.
The “Stress Test” of AI Compute Investment
The market is eager to see whether NVIDIA’s profits and revenue growth can continue to meet or exceed “super expectations” amid the roughly $650 billion to $700 billion in AI-related capital expenditures by tech giants. Investors also expect NVIDIA’s guidance to significantly surpass Wall Street forecasts.
“This quarter’s earnings are especially important because there’s widespread concern about the outlook for AI infrastructure spending—whether we’re in an AI bubble,” said Ivana Delevska, Chief Investment Officer at Spear Invest, which holds NVIDIA stock via an ETF. “Proving that profit growth hasn’t truly slowed will be very important.”
According to Wall Street analyst estimates compiled by LSEG, analysts expect NVIDIA’s quarterly profit for the period ending January to surge over 62% year-over-year, though this would be a slowdown from the 65.3% growth in the previous quarter due to tougher year-over-year comparisons.
NVIDIA’s FY2026 Q4 revenue is projected to jump over 68% to $66.16 billion, and analysts forecast that management will guide FY2027 Q1 revenue up another 64.4% to $72.46 billion. Notably, NVIDIA has beaten analyst revenue estimates for 13 consecutive quarters, but the margin of those beats has narrowed as its market cap, after a three-year surge, reached an unprecedented $5 trillion, prompting analysts to adopt more conservative growth expectations.
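The growth rates and dollar figures here are internally consistent, which can be checked by backing out the implied prior-year quarters. A quick sketch using only the article’s numbers:

```python
# Sanity check of the growth figures above: given projected revenue and
# the quoted year-over-year growth rates, back out the implied
# prior-year quarterly revenue. All inputs come from the article.
q4_rev, q4_growth = 66.16, 0.68    # FY2026 Q4: $66.16B, +68% YoY
q1_rev, q1_growth = 72.46, 0.644   # FY2027 Q1 guide: $72.46B, +64.4% YoY

q4_base = q4_rev / (1 + q4_growth)   # implied year-ago Q4 revenue
q1_base = q1_rev / (1 + q1_growth)   # implied year-ago Q1 revenue

print(f"implied prior-year Q4 revenue: ${q4_base:.1f}B")   # ~$39.4B
print(f"implied prior-year Q1 revenue: ${q1_base:.1f}B")   # ~$44.1B
```

The implied baselines (roughly $39.4B and $44.1B) line up with NVIDIA’s reported quarters a year earlier, which is what makes the quoted growth rates plausible.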
RBC’s equity analysts expect the company to guide revenue for the quarter ending in April at least 3% above consensus. Delevska of Spear Invest, a long-term NVIDIA bull, predicts the company might issue a revenue forecast as much as $10 billion above expectations, exceeding consensus by over 13%.
A recent report from Bank of America states that the global AI arms race remains in an “early to mid-stage,” while Vanguard, one of the world’s largest asset managers, noted that the AI investment cycle may have reached only 30%-40% of its peak, warning at the same time that the risk of a correction in large tech stocks is increasing.
Major Wall Street firms like Morgan Stanley, Citigroup, Loop Capital, and Wedbush believe that the global AI infrastructure investment wave centered on hardware is far from over. They see this as just the beginning, with the “AI inference compute demand storm” expected to drive a global AI infrastructure investment boom worth up to $3 trillion to $4 trillion through 2030.
Demand for DRAM/NAND memory chips remains strong, with prices for products such as DDR4/DDR5 and enterprise SSDs rising rapidly. The AI surge has driven unprecedented demand for memory and underscored its importance in both training and inference systems. Global AI compute demand is growing exponentially and far outpacing supply, as evidenced by the strong earnings reports from TSMC and ASML, the world’s leading chip manufacturer and lithography equipment maker, respectively.
For a U.S. stock market that has been broadly sideways but volatile beneath the surface, NVIDIA’s earnings will not only confirm whether its own growth trend remains intact but also test whether the chain from AI capex to profit realization to valuation still holds, and whether it can dispel “AI bubble” concerns. So far in 2026 the S&P 500 is up only slightly (about 0.2% year-to-date), but the divergence underneath is significant: software and services, the sectors most worried about AI disruption, have underperformed; macro surveys show uncertainty about 2026’s trajectory, with concerns over trade tensions and AI infrastructure spending; and valuations (a forward P/E around 21.6x) remain sensitive. Management’s comments on order visibility, AI investment payback cycles (capex ROI), and industry competition are therefore seen as risk anchors for the whole high-beta AI infrastructure ecosystem, including cloud providers, supply chains, data center power, and AI software.
NVIDIA’s Still No. 1 Earnings Report Could Trigger Market Turbulence
As the highest-market-cap company in the U.S. and the world, NVIDIA can single-handedly shake the stock market: the options-implied ±5% post-earnings swing corresponds to roughly $226 billion of market value on a cap of about $4.7 trillion, amplified further by the stock’s roughly 7.8% weight in the S&P 500.
If NVIDIA’s earnings or guidance fall short—such as only slightly exceeding expectations or providing weak guidance—it could trigger a risk-off sentiment across the semiconductor, cloud, and software sectors, with short-term volatility (VIX) likely to spike (recent weeks’ VIX around 20 suggests increased demand for options protection). Conversely, if revenue and guidance significantly beat expectations and reinforce the “AI compute bull market” narrative, risk appetite could quickly recover, and volatility might decline.
Analysts still expect strong demand for NVIDIA’s high-priced AI chips, which serve as the “brains” for processing massive AI workloads and will dominate this year’s huge capital expenditures by tech giants on expanding or building new AI data centers.
NVIDIA executives have also hinted at discussions with major clients about data center orders for next year, prompting several Wall Street analysts to forecast that the company will update its 2025–2026 backlog of AI compute infrastructure orders, first disclosed at $500 billion in October 2025.
However, the biggest constraint for NVIDIA’s growth may be capacity bottlenecks in the chip supply chain—limiting AI chip shipments as NVIDIA and competitors fiercely compete for capacity on TSMC’s (TSM.US) 3nm process lines.
Jay Goldberg of Seaport Research Partners wrote in a report: “We believe NVIDIA will easily meet expectations, but given TSMC’s capacity constraints, it’s unlikely they will deliver larger upside.”
Still, NVIDIA’s sales of AI chips to China may rebound significantly—previously limited by U.S. export restrictions—potentially boosting revenue and profit forecasts.
NVIDIA CEO Jensen Huang recently stated that he hopes China will allow the company to sell its high-performance H200 AI chips locally, with sales licenses reportedly close to finalization.
Competitor AMD (AMD.US), after obtaining licenses to ship some revised data center CPUs and GPUs to China, has re-included high-performance AI chips in its quarterly outlook for the current quarter.
NVIDIA expects a 75% adjusted gross margin in Q4, up more than one percentage point year-over-year. Analysts generally believe the company will not be hurt by the sharp global memory chip shortage, noting that NVIDIA’s pricing power, and the high-bandwidth memory (HBM) allocations it has likely locked in for the full year and into 2027, will shield it from soaring memory prices.