NVIDIA That "Rose Too High"… the Market Has Not Fully Recognized Its Transition from GPUs to AI Platforms


While the market reads NVIDIA’s (NVDA) market capitalization as “already too high,” a far more important change is happening elsewhere. Analysis suggests that this round of market action is not simply a semiconductor boom but a structural shift: the core axis of enterprise computing is moving from “servers and personal computers” to “AI factories.”

In a recent analysis, research firm theCUBE said that NVIDIA is no longer just a graphics processing unit supplier; it is evolving into a “platform operator” reconstructing the entire foundation of enterprise computing. The analysis explains that whereas x86 servers once underpinned enterprise computing, in the future AI factories integrating power, data, computation, and software will produce “tokens,” perform inference, and automate workflows, becoming the new fundamental unit.

The key is that enterprises do not actually run on perfectly “deterministic” systems. Enterprise resource planning, customer relationship management, finance, human resources, security, and logistics systems are each spread across different data and rules, and the gaps between them are filled by human judgment, exception handling, and manual recovery. The report argues that AI factories are not merely about faster computing; they aim to automate these “connection costs” that humans have borne until now.

Disconnect Between Semiconductor Stock Moves and NVIDIA’s Fundamental Strength

Judging purely by this year’s semiconductor stock performance, the market has placed much higher expectations on latecomers. The report shows that Intel is up about 200% year-to-date, AMD is up 91%, and NVIDIA only about 13%. The fundamental outlook, however, is exactly the opposite: NVIDIA’s revenue is far larger than its peers’, its growth rate is faster, and its free cash flow holds an overwhelming advantage. Despite this, its forward price-to-earnings (P/E) ratio is lower than that of every competitor except Qualcomm.

Market chatter suggests that NVIDIA is already large enough, and that rivals such as AMD, Intel, Google’s TPUs, Amazon Web Services’ Trainium, and Broadcom could erode its moat. The report, however, reads this interpretation as pricing in “concerns” ahead of time rather than reflecting actual changes in market share.

The main point is simple: NVIDIA’s advantage lies not in market share itself, but in the “flywheel effect” that share creates. The higher the sales volume, the faster it can reinvest; the greater the ecosystem loyalty, the stronger its ability to lock in the supply chain. The report’s view is that, because this structure underpins its annual product innovation cycle, NVIDIA can not only defend its share of the accelerated computing market but may even increase it.

“Token Economy” Becomes the New Standard… the Market Is Far Bigger Than the CPU Replacement Cycle

The economic logic of this transformation differs from past server replacement cycles. In the past, when CPU performance improved, enterprises replaced their equipment every few years. In the AI era, under power constraints, the standard of value is how many tokens can be produced at the lowest cost. If the power supply is essentially fixed, then the more inference and automation work a system can deliver within that same power envelope, the stronger its profitability.
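The cost logic above can be sketched with hypothetical numbers. The power draw, electricity price, and token throughput figures below are illustrative assumptions, not figures from the report; the point is only that, at a fixed power budget, higher token throughput directly lowers energy cost per token.

```python
# Illustrative sketch of "token economics" under a fixed power budget.
# All numbers are hypothetical assumptions, not figures from the report.

def cost_per_token(power_kw: float, price_per_kwh: float,
                   tokens_per_second: float) -> float:
    """Energy cost (USD) per token for a facility at a fixed power draw."""
    tokens_per_hour = tokens_per_second * 3600
    energy_cost_per_hour = power_kw * price_per_kwh  # kW * USD/kWh = USD/h
    return energy_cost_per_hour / tokens_per_hour

# Same hypothetical power budget, two accelerator generations that differ
# only in token throughput:
gen_a = cost_per_token(power_kw=1000, price_per_kwh=0.08, tokens_per_second=2e6)
gen_b = cost_per_token(power_kw=1000, price_per_kwh=0.08, tokens_per_second=5e6)
assert gen_b < gen_a  # more tokens per watt -> lower cost per token
```

Under these assumptions, electricity spend per hour is identical for both generations, so the generation that emits more tokens in that hour wins on cost per token, which is the report's core "fixed power" argument.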

The report forecasts that NVIDIA’s revenue will rapidly rise from $60.9 billion in fiscal 2024, to $130.5 billion in fiscal 2025, and to $215.9 billion in fiscal 2026. Using an exchange rate of 1 USD = 1,465.50 KRW, this corresponds to approximately 89.24 trillion KRW, 191.25 trillion KRW, and 316.37 trillion KRW, respectively. Market consensus expects its 2027 revenue to exceed $350 billion, and some forecasts even suggest it could reach above $370 billion.
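The KRW figures above follow directly from the quoted exchange rate. A quick sketch of the conversion (revenue figures from the article; small differences versus the quoted trillions come down to rounding):

```python
# Convert the fiscal-year revenue figures quoted above from USD to KRW
# at the article's stated rate of 1 USD = 1,465.50 KRW.
RATE_KRW_PER_USD = 1465.50

def usd_billions_to_krw_trillions(usd_bn: float) -> float:
    """Billions of USD -> trillions of KRW at the stated rate."""
    return usd_bn * 1e9 * RATE_KRW_PER_USD / 1e12

for fiscal_year, usd_bn in [("FY2024", 60.9), ("FY2025", 130.5), ("FY2026", 215.9)]:
    krw_trn = usd_billions_to_krw_trillions(usd_bn)
    print(f"{fiscal_year}: ${usd_bn}B ≈ {krw_trn:.2f} trillion KRW")
```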

The market is growing so strongly because AI infrastructure is no longer a pure IT cost; it has become a “system that generates revenue.” Tokens are, in essence, the outputs of inference and automation, and they determine productivity across business areas such as customer service, development, logistics, inventory, risk management, and security. The report notes that some AI-native enterprises already generate roughly 10 times the per-employee revenue of traditional companies.

x86 Will Not Disappear… Instead, It Will Be “Absorbed” by NVIDIA’s Platform

The most striking part of the analysis is not the decline of x86 but its “absorption.” Enterprises’ core data and applications still live in the x86 environment, so a full replacement is not realistic. The analysis indicates that a more likely path is to keep deterministic business systems in place while layering an AI factory on top of them.

The report specifically points to NVIDIA’s collaboration with Intel as a potential core pathway for this transition. Intel maintains relevance in the AI era and obtains cash; NVIDIA gains access to a massive x86 installed base; and enterprise customers can migrate to AI infrastructure without dismantling and replacing their existing systems.

The report also believes the much-discussed debate over CPU-to-GPU ratios has been exaggerated. With CPU utilization currently low, what matters more than the simple ratio is how far utilization of the entire platform can be raised. This means the ability to design integrated architectures, rather than competition on component counts, is more likely to decide the outcome.

NVIDIA’s True Weapon Is Not Chips, But the “Full Stack”

The report emphasizes that NVIDIA has moved beyond being a “chip company” and is building a full-stack platform. Its moat begins with the CUDA software ecosystem and is reinforced by DGX integrated systems, Mellanox networking, Grace Hopper CPU-GPU integration, Spectrum-X networking, Blackwell, Mission Control, Omniverse, and an annual roadmap running through Rubin to the future Feynman.

In particular, the acquisition of Mellanox is seen as a watershed in NVIDIA’s growth. AI factories need to connect hundreds of thousands of GPUs as if they were a single system, and at that scale the bottleneck is more likely to lie in the network than in the chips. Through NVLink, InfiniBand, Spectrum-X, and BlueField DPUs, NVIDIA is turning the network from a mere connectivity tool into part of the computing structure itself.

In this architecture, the unit of computing is no longer the server but the “rack.” GPUs, CPUs, DPUs, memory, networking, storage, cooling, and operational software are optimized as one integrated system to cut the cost per token with each generation, an approach fundamentally different from the era when customers assembled components themselves. The report sees this as a key factor distinguishing NVIDIA from general semiconductor companies.

Storage, Databases, Recovery… the AI Era Is Rewriting Everything

The shift to AI factories is not only a change in computing equipment. Storage is moving from an “add-on device” to “contextual memory,” and data platforms are shifting from warehouses that query the past to real-time semantic hubs. Although enterprise data warehouses or lakehouse architectures are used for analytics…

TP AI Notes and Precautions: This article was summarized using TokenPost.ai’s foundation language model. Key content may be omitted or may not match the facts.
