xAI, once valued at $230 billion, “died” on May 6.

Author: Xiaojing, Tencent Technology

On the afternoon of May 6, U.S. time, Elon Musk announced on social network X: “xAI will no longer exist as an independent company; it will simply be SpaceXAI, which is SpaceX’s AI product.”

On the same day, SpaceXAI signed a compute leasing agreement with Anthropic, handing exclusive use of all 220,000 NVIDIA GPUs at xAI’s most valuable asset, the Colossus 1 data center, to OpenAI’s strongest competitor.

Also that day, SpaceX filed an application in Texas to build a semiconductor fab called “Terafab,” with an initial investment of $55 billion and a total investment of up to $119 billion at completion.

A death notice for an AI company, a deal arming a rival, and the groundbreaking of a superfactory, all on the same day: very much in keeping with Elon Musk’s persona.

But Musk’s theatrics have never been just theater. What exactly is he trying to do?

How xAI died

In July 2023, Musk announced the establishment of xAI with great fanfare. The founding team consisted of 11 core talents from DeepMind, OpenAI, and Microsoft Research, with the mission to “understand the nature of the universe.” The motivation was straightforward: to oppose OpenAI, which Musk believed had betrayed its open-source founding intent. Musk invested not only money but also exclusive resources: over 500 million real-time data points per day from the X platform as training data, and Colossus, the world’s largest AI training cluster, built from scratch in 122 days.

Image: xAI was founded in 2023, with the official mission to “understand the universe.”

xAI has never lacked funding. It has raised over $42 billion, and after its latest funding round in January 2026, its valuation rose to $230 billion. Investors include NVIDIA and Cisco. It also has no shortage of computing power.

xAI claimed that by the end of 2025, Colossus 1 and 2 would combine for over 1 million H100-equivalent GPUs.

What it lacks is people.

Starting in January 2026, the founding team trickled out. By mid-February, when SpaceX announced its acquisition of xAI, half of the 11 had already departed. In mid-March, Musk said on X, “xAI has not built the right foundation this time; we are rebuilding from the ground up.” On March 28, the last two remaining founders, pretraining lead Manuel Kroiss and Musk’s long-time aide Ross Nordeen, confirmed their departures. With that, all 11 co-founders were gone.

The people Musk recruited to fight OpenAI voted with their feet, and the fight was lost. As a product, Grok is not without market potential: according to Apptopia, its U.S. mobile market share rose from 1.9% in January 2025 to 17.8% in January 2026, with a global web share of about 3.4%. But in the developer and enterprise markets it is almost nonexistent: Claude Code’s annual revenue reached $2.5 billion in 2025, ChatGPT’s enterprise clients number in the millions, and Grok has nothing comparable in either arena.

xAI died of an ironic paradox: it had nearly the most GPUs in the world, yet it could not retain the people capable of building models.

Renting 220k GPUs to Anthropic: weapons handed to a rival

On May 6, according to foreign media reports, SpaceXAI signed a compute cooperation agreement with Anthropic. At its core: Anthropic gains exclusive use of all computing resources at the Colossus 1 data center (located in Memphis, Tennessee; over 220,000 NVIDIA GPUs; total capacity exceeding 300 MW). Anthropic will use the capacity to raise the user ceilings of Claude Pro and Claude Max and to expand the compute behind Claude Code. MarketWatch reported that the move addresses the compute bottleneck facing Claude Code.

Image: On May 6, Anthropic announced a partnership with SpaceX to enhance Claude Code and Claude API capacity.

The two companies also signed a more speculative letter of intent: to jointly develop “multi-gigawatt orbital AI compute.”

The absurdity of the deal shows in the timeline. On April 28, Musk’s $150 billion lawsuit against OpenAI, Sam Altman, and Greg Brockman went to trial in federal court in Northern California, opening three weeks of fierce argument. Barely a week into that trial, Musk’s company signed over the entire capacity of its largest AI training asset to OpenAI’s biggest competitor. Musk founded xAI expressly “to oppose OpenAI.” Now xAI’s legacy is a compute base helping Anthropic catch up to and surpass OpenAI.

Musk also added a condition in his post: “SpaceX will provide computing resources to other AI companies, provided they use their models to benefit all humanity.” This is almost a verbatim quote from OpenAI’s founding declaration in 2015.

But beneath Musk’s storytelling, the economic logic is simpler. Colossus 1 was built mainly to train Grok. With the team that created Grok gone, the multi-billion-dollar facility becomes a pure cost sink, burning money daily on electricity, cooling, maintenance, and depreciation. Leasing it out wholesale converts it immediately into stable cash flow.

Terafab: the real intention behind building a $55 billion chip factory

The third announcement on the same day: SpaceX and Tesla jointly submitted a proposal to build a semiconductor manufacturing facility called “Terafab” in Grimes County, Texas. According to foreign media reports, the initial investment is at least $55 billion, with total investment reaching up to $119 billion once completed.

The logic of Terafab aligns with leasing Colossus to Anthropic.

Buying GPUs from others means living with NVIDIA’s supply chain: long certification cycles, allocation priorities, and no room to negotiate on price. Making chips in-house extends the “selling shovels” business model from leasing down into manufacturing.

Terafab’s target is closer to a hybrid of TSMC (foundry manufacturing) and AWS (compute leasing). Musk aims to create an entire AI compute chain—from chip manufacturing to assembly into clusters to selling compute power to clients.

“Space compute”: just an IPO story?

Beyond that, Musk’s compute story is even more imaginative. He has repeatedly stated: “Earth’s energy and heat dissipation limits will soon constrain AI compute development; within two to three years, the minimum cost of generative AI computation will shift to space.”

The physics is at least plausible: in the vacuum of space, waste heat is shed by radiation to a cold, stable sink rather than into ambient air, and solar power in orbit suffers no atmospheric attenuation or day-night cycle. SpaceX also has a unique structural advantage: owning the rockets means launch costs can be internalized, and Starlink’s 6,000-plus satellites in orbit mean the data transmission infrastructure already exists.
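The radiative-cooling point can be sanity-checked with the Stefan-Boltzmann law. A minimal sketch; the radiator temperature and emissivity below are illustrative assumptions, not figures from the article:

```python
# Stefan-Boltzmann sanity check for the radiative-cooling point above.
# Radiator temperature and emissivity are illustrative assumptions,
# not figures from the article.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiated_w_per_m2(temp_k: float, emissivity: float = 0.9) -> float:
    """Heat rejected per square meter of radiator facing deep space."""
    return emissivity * SIGMA * temp_k ** 4

print(f"{radiated_w_per_m2(300):.0f} W/m^2 at 300 K")  # ~413 W/m^2
print(f"{radiated_w_per_m2(330):.0f} W/m^2 at 330 K")  # ~605 W/m^2
```

A few hundred watts per square meter is modest next to the megawatt-scale heat loads of a GPU cluster, so whether orbit actually beats ground cooling hinges on how much radiator area and mass a design can afford to launch.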

However, while large-model training can tolerate some communication delay, inference services are extremely latency-sensitive. The physical floor for satellite-to-ground links is 20-40 milliseconds, and network jitter and queuing add more on top, leaving them an order of magnitude slower than ground data centers. In other words, space compute might absorb training workloads, but it cannot replace ground-based inference clusters in the short term.
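The latency floor can be roughed out from light-travel time alone. A back-of-envelope sketch; the orbit altitude and slant ranges are illustrative assumptions for a Starlink-like constellation, not figures from the article:

```python
# Light-travel-time floor for a satellite-to-ground link.
# Altitude and slant ranges are illustrative assumptions
# (a Starlink-like ~550 km LEO constellation), not from the article.
C_KM_PER_MS = 299_792.458 / 1000.0  # speed of light, km per millisecond

def round_trip_ms(slant_range_km: float) -> float:
    """User -> satellite -> ground station and back: four one-way hops."""
    return 4.0 * slant_range_km / C_KM_PER_MS

print(f"overhead pass:      {round_trip_ms(550):.1f} ms")   # ~7.3 ms
print(f"low-elevation pass: {round_trip_ms(1800):.1f} ms")  # ~24.0 ms
```

Propagation alone thus accounts for roughly 7-24 ms per round trip; queuing, routing, and jitter push real-world figures into the 20-40 ms range quoted above.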

The more binding constraint is economics. Even if SpaceX halves the launch cost per kilogram, the per-watt cost of a space data center will remain significantly higher than a ground-based one for the next two to three years. Unless ground electricity and cooling costs surge past a critical point, the economics of space compute remain questionable.

But the narrative serves two other purposes well: swaying IPO investors and motivating engineers. “Rocket company” plus “AI space infrastructure company” is a story brimming with imagined futures. According to foreign media reports, SpaceX confidentially filed for an IPO on April 1 under the code name “Project Apex,” hiring 21 investment banks and aiming to list in June at a $1.75 trillion valuation while raising $75 billion. A $1.75 trillion valuation plainly leans heavily on that story.

Taken together, the three events of May 6 point the same way: Musk wants to be an AI-era “water seller,” profiting from the gold rush no matter who strikes gold.

The X platform’s 500 million daily real-time data points supply the data; the Grok model stays in service, downgraded from a “mission” to a “product”; compute comes from the Colossus clusters and the Terafab chip plant; communications from Starlink’s global coverage; launch from Falcon and Starship. From data to models to chips to bandwidth to launch, Musk is after full-chain vertical integration.

However, if in two years the AI investment bubble bursts, enterprise AI spending shrinks, and compute demand growth slows, the $55 billion chip factory becomes a sunk cost, and space data centers turn into orbital debris.

Nevertheless, Musk might still win—his bet is that AI will never stop.
