xAI leases computing power to Anthropic: Musk's compute empire begins to leak
Written by: Little Biscuit, Deep Tide TechFlow
If three months ago you told a Silicon Valley investor that Elon Musk would lease all of xAI’s largest training cluster, Colossus 1, to Anthropic, they would probably laugh out loud.
After all, in February, Musk was still on X berating Anthropic for "hating Western civilization," and in March he nicknamed the company "Misanthropic." In Musk's eyes, it was almost a byword for politically correct AI, an opponent that, like OpenAI, had to be brought down.
Then on May 6, Anthropic and SpaceX jointly announced that Anthropic would receive all of Colossus 1's computing power: more than 220,000 Nvidia GPUs and 300 megawatts of power capacity, with delivery completed within one month. Anthropic explicitly stated that this capacity would go directly toward improving the service experience for Claude Pro and Claude Max subscribers.
On X, Musk posted a message that left everyone stunned. He said he had been in close contact with Anthropic executives over the past week, came away "impressed," and found that "they are all capable and are seriously doing the right things." He even said that Claude would "probably be good."
On the same day, he announced that xAI would be dissolved as an independent company and renamed SpaceXAI.
This is a capacity transfer.
Mainstream English media wrote this up as a “landmark event in AI compute sharing,” but they missed one key fact:
Colossus 1 is xAI’s core training facility—not some “spare capacity.”
Let’s review the timeline. Colossus 1 was completed in Memphis in September 2024. It took only 122 days from groundbreaking to being powered on—a miracle in the history of data center construction. It is the main cluster that xAI uses to train Grok 3 and Grok 4, and it is the physical carrier of Musk’s “compute is power” narrative. It is equipped with more than 220,000 GPUs, including H100, H200, and the latest GB200. As of the end of 2025, the cluster size ranked among the global top three.
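As a rough sanity check on the figures cited above, the 300-megawatt capacity and the 220,000-GPU count are mutually consistent. (The per-GPU power draw and facility overhead factor below are outside assumptions, not from the article.)

```python
# Back-of-the-envelope check: does 300 MW plausibly power ~220,000 GPUs?
# GPU count and facility capacity are from the article;
# per-GPU TDP and overhead (PUE) are outside assumptions.
facility_watts = 300 * 1e6   # 300 MW
gpu_count = 220_000

watts_per_gpu_all_in = facility_watts / gpu_count
print(f"{watts_per_gpu_all_in:.0f} W per GPU, all-in")  # ~1364 W

# An H100 SXM draws roughly 700 W at the chip; host CPUs, networking,
# and cooling overhead (PUE on the order of 1.3-1.5) roughly double
# that budget, so ~1.4 kW per GPU all-in is in the right ballpark.
```

The arithmetic supports reading Colossus 1 as a facility sized to run its full GPU fleet flat out, not a site with idle headroom built in.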
Handing over a training cluster of this scale in its entirety to a direct competitor is equivalent to TSMC leasing the entire capacity of its 5-nanometer production lines to Samsung. Something like this has never happened in the semiconductor industry. Anyone who has gone through a cycle knows this kind of move only appears under one circumstance: you can’t use the capacity yourself.
And SpaceXAI's official claim is that Anthropic's compute will "directly benefit Claude Pro and Claude Max subscribers." In other words, the compute Anthropic gets is used for inference: running models for paid Claude users, handling requests for the very AI Musk hates most.
"Customer collaboration" doesn't quite capture what happened here. In a sense, actual control of Colossus 1 has changed hands.
Grok can’t support the scale of Colossus
Why was it “underutilized”?
The most direct answer is hidden in Grok’s user data.
According to Similarweb data released in April, Grok’s daily active users (DAU) on global mobile applications fell from 13.9 million in March to 12.2 million in April, a 12.5% month-over-month decline. In the U.S. market, the drop was even sharper—from 1.4 million to 1.1 million, a 15.6% decline month over month. A year ago, it was the second-largest AI application globally, behind ChatGPT; by April, it had fallen to fifth place, overtaken in sequence by Claude, Gemini, and DeepSeek.
Meanwhile, Claude’s DAU rose from 16 million to 23 million, a 44% month-over-month increase.
This is a brutally stark comparison: in 2026, a year when AI applications generally grow rapidly, Grok is one of the few top products losing users. The reason isn’t complicated. Grok’s core scenarios have always been tied to the X (formerly Twitter) platform itself, existing as a tool for “real-time search + biting commentary.” But in the standalone app and web versions, it never formed the same kind of “workflow stickiness” that Claude has. Many Reddit users complain that Grok has gradually moved image and video generation features behind paywalls; combined with regulatory investigations in multiple countries and Apple’s threats to ban it, its growth engine has basically sputtered to a near stop.
What’s worse is inside xAI.
According to a report by Fast Company in April, more than 80 employees have left xAI over the past few months, including several co-founders. A February report in the Financial Times said Musk kept pressuring the team with "unreasonable technical targets" in an attempt to catch up with competitors, an archetypal reaction from a leader whose product is losing ground step by step.
Put these two things together, and the answer to why Colossus 1 has excess capacity becomes clear: it was built for a Grok that was much larger than the one today.
The real challenge for SpaceXAI: the valuation narrative
“Insufficient Grok demand” is only the surface.
Deeper down, the logic is this: Musk needs a new story to support SpaceXAI’s $1.25 trillion valuation.
Recall what happened in February this year. SpaceX acquired xAI in an all-stock deal; the merged entity was valued at $1.25 trillion, making it the largest M&A deal in history. Before the acquisition, xAI's most recent funding round was its January Series E: $20 billion raised at a $230 billion valuation. Folding xAI into SpaceX is, in essence, using SpaceX's rocket-business cash flows to keep alive a money-burning black hole that still loses $1.46 billion every quarter.
But even with SpaceX’s lifeline, SpaceXAI still faces a sharp problem: why is it worth that much?
OpenAI’s most recent valuation is $852 billion. ARR is about $24–25 billion, implying a valuation-to-revenue multiple of around 35x. Anthropic is negotiating a $900 billion valuation, with ARR of $30 billion—about 30x.
What about xAI? In Q3 2025, its revenue was $107 million and its net loss was $1.46 billion. Even on an optimistic estimate of $2 billion in Grok revenue for 2026, the corresponding portion of SpaceXAI would still trade at a valuation-to-revenue multiple far higher than OpenAI's or Anthropic's. In other words, Musk urgently needs a new cash-flow story for SpaceXAI. Grok's user growth can't deliver it, and neither can enterprise API revenue.
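The multiples above are straightforward to reproduce from the article's own figures. (Taking $24.5 billion as the midpoint of OpenAI's stated ARR range, and using xAI's last stand-alone valuation of $230 billion for the comparison, are my assumptions.)

```python
# Valuation-to-revenue multiples implied by the figures cited above.
def multiple(valuation_bn: float, revenue_bn: float) -> float:
    """Valuation divided by annual revenue, both in billions of USD."""
    return valuation_bn / revenue_bn

print(f"OpenAI:    {multiple(852, 24.5):.0f}x")   # ~35x
print(f"Anthropic: {multiple(900, 30):.0f}x")     # 30x
# xAI's last stand-alone round (January Series E) valued it at $230B;
# even the optimistic $2B revenue estimate for 2026 implies:
print(f"xAI:       {multiple(230, 2):.0f}x")      # 115x
```

Even under that generous revenue assumption, xAI's implied multiple sits several times above its rivals', which is the gap the new "AI cloud infrastructure" story is meant to close.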
Leasing Colossus to Anthropic is the opening of that story.
It instantly reframes SpaceXAI from a “model company” into an “AI cloud infrastructure provider”—a bit like CoreWeave, but with a larger scale and greater power supply. In the world of narrative valuations, cloud companies are worth more than model companies. Cloud companies can show long-term contracts and predictable cash flows—something that pure model companies have great difficulty providing.
Add to that the vague “orbital compute center” memorandum between Anthropic and SpaceX, where both sides agreed to “explore” deploying multi-gigawatt-class AI data centers in space, and you can see where all of this is pointing. This is a new set of assets on SpaceX’s balance sheet prepared for an IPO: rockets, Starlink, ground data centers, orbital compute—packaged into a massive infrastructure story. Grok itself becomes largely irrelevant. What matters is the GPUs, power, and launch pads in Musk’s hands.
The true meaning of Musk’s 180-degree attitude shift
Within this framework, there is another explanation for Musk’s dramatic change in stance toward Anthropic.
It’s a deal.
Beyond rent, what Anthropic is providing to SpaceXAI is credit backing. By publicly endorsing the availability, scalability, and operational quality of Colossus 1, Anthropic is effectively issuing SpaceXAI an entry ticket into the “compute infrastructure club.” Members of this club include AWS, GCP, Azure, and CoreWeave. Before this, xAI’s reputation in cloud services was roughly equivalent to zero. It previously only used compute to train its own models, and never truly operated as an external provider.
For Anthropic, this deal is also extremely worthwhile. It is raising capital at a $900 billion valuation, possibly preparing for an IPO in October. Its publicly disclosed need is 5 gigawatts of training compute. The 300-megawatt capacity SpaceX provides may not sound large, but the value lies in “immediate delivery”: being powered on within a month, directly easing Claude’s current inference pressure. In April, Anthropic openly admitted that Claude’s “reliability and performance” were affected during peak periods due to “infrastructure pressure.” The emergency capacity of 300 megawatts is worth far more than the number on paper.
This is a two-way narrative transaction. Anthropic gets service stability; SpaceXAI gets a valuation story.
Who conceded? Musk himself did, in the sense that he made a deal with a long-time rival and spoke well of the other side. But at a deeper level, the concession is Grok. As a product, as a model company, and as Musk’s flagship weapon against OpenAI/Anthropic, Grok is quietly being downgraded into just another ordinary business within SpaceXAI’s portfolio. Core strategic assets like Colossus are being freed up for customer use—which means Musk is no longer treating “self-developed models” as the main battlefield.
In that sense, May 6 marks the end of Grok’s era as a “frontier model company.”
An industry inflection signal: capacity begins to concentrate among a few players
Zoom out further, and the industry significance of this event may be larger than what we can see right now.
Throughout 2024 and 2025, the AI compute market was in a state of industry-wide frantic scramble. OpenAI was grabbing, Anthropic was grabbing, xAI was grabbing, Mistral was grabbing, and so were sovereign funds around the world. GPUs were hard currency. Data center siting was a geopolitical issue. Power supply was a national strategic issue. In an environment where everyone was short on supply, no one would lease a training cluster to a competitor, because every GPU hour leased out today could be the key compute needed to catch up tomorrow.
And now, xAI has done it.
This means the first signs of segmentation are emerging in the AI compute market. The compute demand of leading model companies (OpenAI, Anthropic, Google DeepMind) is still growing exponentially, while second-tier and below model companies begin to see capacity loosen. This kind of segmentation appears in the middle-to-late stages of every capacity expansion cycle—on the timeline from solar cells to EV battery cells to Bitcoin mining rigs, the script is almost identical. In the early stage everyone is short. In the mid-cycle, capacity starts to overflow to second-tier players. In the late stage, top-tier players consolidate upstream and downstream, while second-tier players either pivot into becoming infrastructure service providers, get acquired, or die off.
CoreWeave is the best comparison. It started as an Ethereum mining operation; in 2018, during a window of GPU surplus, it pivoted to AI cloud. It went public in 2025, and its market value has reached $60 billion. CoreWeave's very existence proves that the path of "if models don't work, sell compute" can work. SpaceXAI is retracing that path, but more aggressively: besides selling ground compute, Musk also wants to sell compute in space.
The true signal of the peak of the AI bubble might be that second-tier model companies are rapidly turning into cloud service providers. When the core industry narrative shifts from “I have the best models” to “I have the most GPUs,” it usually means differentiation-based competition has reached its end.
One detail worth noting is this: in Memphis, where Colossus 1 is located, xAI deployed dozens of natural gas turbine units to supply power during construction in order to meet deadlines, claiming “temporary use” that did not require federal approval. Local residents have continued to protest over air pollution issues, and the matter has not been resolved even now.
And now, these gas-turbine-powered GPUs will be used to run Anthropic's Claude, the product of one of the labs most outspoken about AI ethics and climate issues.
Even more absurdly, in their announcement, Anthropic and SpaceX said they are “interested” in deploying multi-gigawatt AI compute in orbit. Musk’s logic is: Earth’s power and heat dissipation will eventually be insufficient, and the future of AI is in space.
Between Memphis's gas turbines and Musk's slide decks about orbital solar panels lies an enormous valuation expectation. Leasing Colossus 1 to Anthropic is the first new story Musk is using to support it.
xAI took only three months to transform from a challenger into a supplier. Who will be the next one to be repriced?