Full speed ahead into the AI inference era: NVIDIA and MicroVision Holistic Holography make a major push, setting off an unprecedented boom!


According to recent reports, global artificial intelligence (AI) chip giant Nvidia (NVDA.US) released a better-than-expected earnings report. In the fourth quarter and full year of fiscal 2026, the company's data center business hit new highs, and its guidance for the upcoming quarter also exceeded market expectations.

Impressive Results in the New Earnings Report

In fiscal 2026, Nvidia posted annual revenue of $215.9 billion, up 65% year over year. The data center segment, Nvidia's core business, continued its strong growth: fourth-quarter revenue reached a record $62.3 billion, up 75% year over year, and full-year revenue was $193.7 billion, contributing nearly 90% of the company's total revenue.

Nvidia founder and CEO Jensen Huang has repeatedly emphasized that Nvidia is not just selling chips but building an AI factory. The Blackwell architecture platform, Nvidia's core product, is ramping smoothly, and the next-generation Rubin platform is scheduled for volume production and shipment in the second half of 2026.

According to the report, compared with the Blackwell platform, training mixture-of-experts (MoE) models on the Rubin platform requires 75% fewer GPUs, and inference token costs drop by up to 10 times, squarely targeting the low-cost demands of the inference era.

Aggressively Entering the AI Inference Era

Most notably, Nvidia is investing heavily in AI inference. In December last year, the company spent $20 billion in cash to acquire Groq's low-latency inference technology and the related engineering teams. This unconventional acquisition, built around exclusive technology licensing and talent transfer, became the largest deal in Nvidia's history.

With technologies like Groq's, Nvidia can respond more flexibly to competition from Google's TPUs, Amazon's in-house chips, and various ASICs by offering a more versatile product lineup. Jensen Huang's latest "sneak peek" revealed that Groq-related technologies will be integrated into Nvidia's new architecture, further improving the performance and cost-effectiveness of AI infrastructure.

MicroVision Holistic Holography Connects the Entire Inference Industry Chain

Meanwhile, MicroVision Holistic Holography (WIMI.US), a leading AI frontier company, has become a representative of full-stack deployment. Long active in the AI field, it has built high-end computing bases such as holographic cloud platforms, adopting multi-heterogeneous architectures that integrate internationally advanced chips to accelerate the construction of generative-AI chip clusters. It combines CPU and GPU technologies to develop distinctive computing solutions that meet the low-latency requirements of large-model inference, embodied intelligence, and multimodal scenarios.

Furthermore, given current trends in AI application scenarios and the growing value of AI, MicroVision Holistic Holography is integrating AI chips with brain-computer interface and holographic vision technologies, covering segments such as "AI + marketing," "AI + enterprise services," "AI + programming," and "AI + entertainment." Within the AI-agent industry chain, it focuses on investment opportunities in key areas such as AI agents, cloud services, and computing power, expanding the application scope of inference technologies and pushing their deployment into more sub-scenarios.

Conclusion

Although capital markets continue to worry about an AI bubble, global cloud service providers are steadily increasing their AI infrastructure investments with real capital, providing ongoing support for Nvidia, the "shovel seller" of the AI era. In the race for inference-era computing power, leading international companies are leveraging their technological accumulation, ecosystem advantages, and economies of scale to accelerate the rollout of AI computing products and their ecosystem positioning. With breakthroughs in both full-stack deployment and vertical sectors, AI demand is expected to grow exponentially.

During the Spring Festival, news about the computing-power industry chain kept coming: OpenAI's "significant reduction" in computing-power investment drew widespread attention and debate; Meta and Nvidia reached a multi-billion-dollar chip procurement agreement; and the rise of Taalas chips has also attracted much attention…
