The prosperity of AI computing power may be an illusion supported by two companies.
Overseas tech commentator Ed Zitron recently made a sharp judgment: the current AI compute economy may not be supported by broad and healthy market demand, but rather heavily relies on two companies, OpenAI and Anthropic. Cloud service providers invest in AI companies, which then use the funds to purchase cloud services and computing power, forming a cyclical growth narrative.
This view may not represent the full truth, but it reminds us: to judge whether the AI boom is sustainable, we should look beyond funding amounts and data center construction scales, and focus more on real customers, cash flow quality, and ultimate demand.
Introduction
In the past two years, the most prominent growth story in the AI industry has not only been the leap in large model capabilities but also a capital expenditure frenzy centered around GPUs, cloud services, and data centers. Giants like Microsoft, Amazon, Google, and Oracle continue to ramp up AI infrastructure, with NVIDIA becoming the most dazzling beneficiary of this cycle.
But a sharper question is emerging: who will ultimately use these newly built data centers? If the main large clients are only OpenAI and Anthropic, then is the so-called AI compute prosperity merely a cyclical narrative sustained by a few companies, a few cloud providers, and a few capital transactions?
American tech critic Ed Zitron offers a very radical but worth-discussing judgment in his article “Premium: AI’s Circular Psychosis”: the AI economy is forming a “circular delusion.” In this cycle, cloud giants invest in AI companies, which then pay cloud giants for compute; cloud giants confirm future revenue and continue expanding data centers and purchasing GPUs. Each link appears to be growing, but if the ultimate demand is insufficient, this mechanism could become extremely fragile.
Zitron’s core judgment isn’t complicated: a significant part of the entire AI economy actually hinges on OpenAI and Anthropic. According to his analysis, these two companies not only occupy a large portion of Amazon, Google, and Microsoft’s AI compute capacity but also contribute a substantial share of these companies’ AI revenue; more critically, they may also account for a large part of the future backlog of cloud service orders.
This means what the market sees isn’t just “growing cloud computing demand,” but a highly concentrated customer structure: cloud providers’ AI orders come from AI companies, whose payment ability is driven by financing and cloud investments. In other words, funds aren’t simply flowing from end customers to model companies and then to cloud providers; rather, there’s a significant cycle among investors, cloud service providers, and AI companies.
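The cycle described above can be made concrete with a toy model. The figures and function below are purely hypothetical, not taken from any company's filings; the point is only to show how a cloud provider's booked revenue can be much larger than the cash that actually entered the loop from outside.

```python
# Toy model (hypothetical numbers) of the circular flow described above:
# a cloud provider invests in an AI lab, the lab spends on cloud compute,
# and the provider books that spend as revenue.

def circular_flow(cloud_investment, outside_financing,
                  end_customer_revenue, compute_spend):
    """Return (booked cloud revenue, cash that entered from real end demand)."""
    lab_cash_in = cloud_investment + outside_financing + end_customer_revenue
    assert compute_spend <= lab_cash_in, "the lab cannot spend cash it lacks"
    booked_cloud_revenue = compute_spend     # provider recognizes the lab's bill
    external_demand = end_customer_revenue   # only this part is non-recycled
    return booked_cloud_revenue, external_demand

# Hypothetical figures in $B: the lab's compute bill dwarfs end-customer sales.
booked, external = circular_flow(cloud_investment=10, outside_financing=5,
                                 end_customer_revenue=3, compute_spend=12)
print(booked, external)  # 12 3 -> most "revenue" is recycled capital
```

In this sketch the provider reports $12B of cloud revenue, but only $3B of it traces back to paying end customers; the rest is its own investment and third-party financing flowing back as bills.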
This structure isn’t necessarily unsustainable. Early tech industries often relied on financing for growth—cloud computing, electric vehicles, shared mobility all went through similar phases. The problem is that the scale of investment in AI infrastructure is enormous, and currently, few companies seem capable of continuously consuming large-scale GPU compute as market imagination suggests.
The chart shows that OpenAI's and Anthropic's spending commitments to Microsoft, Oracle, Google, and Amazon account for a significant share of these cloud providers' order backlogs. Pink indicates OpenAI's commitments, orange Anthropic's, and gray the remaining backlog. Source: The Information, via Where's Your Ed At.
If this calculation holds, the conclusion is a warning: a large part of the cloud giants' projected future revenue depends not merely on AI demand in general, but on OpenAI's and Anthropic's ability to keep financing, expanding, and paying enormous cloud bills.
Zitron’s criticism of Anthropic is especially sharp. He believes that Anthropic’s problem isn’t just losses, but that it has formed a cyclical financial relationship with Amazon and Google: cloud giants invest in Anthropic, which then spends money on cloud services and compute, generating revenue expectations for the cloud giants and further expanding infrastructure.
From a financial narrative perspective, this seems like a win-win: AI companies get the compute needed for training and inference, cloud vendors secure major clients, and capital markets get a growth story. But if Anthropic itself lacks sufficient revenue and profit capacity, its ability to pay cloud bills heavily depends on external financing.
This is the key point of the “circularity” mentioned in the article: a cloud provider’s future revenue may depend on whether the AI companies it invests in can keep financing; meanwhile, AI companies’ growth stories rely on cloud providers continuously providing compute, investments, and discounts. On paper, this looks like a high-growth chain; from another angle, it’s also a dependency risk chain.
For Chinese readers, this isn’t unfamiliar. Any high-investment industry in rapid expansion often follows a “build infrastructure first, then wait for demand to materialize” logic. The difference is that AI compute unit costs are extremely high, and technological depreciation is rapid. If real demand falls short of expectations, sunk costs could be very heavy.
Another case worth noting is Anthropic taking over the 300MW Colossus-1 data center built by Elon Musk's xAI. Musk once called Colossus-1 "the world's most powerful AI training system," saying its purpose was to train Grok. Now that capacity has been transferred to Anthropic.
Zitron sees this as a very unusual signal: if even large model companies like xAI don’t need to build all their own capacity, then outside of OpenAI and Anthropic, how many truly large-scale GPU buyers are there in the market?
This question is critical. Over the past year, the prevailing narrative has been that "AI compute is forever insufficient." But "insufficient compute" requires specific customers to back it up. Who will buy long-term? Who can pay? Whose revenue can cover inference and training costs? These questions can't be answered by "future demand" alone.
Zitron cites Sightline Climate's statistics showing 15.2GW of capacity under construction, expected to be completed by the end of 2027. If this capacity must ultimately be absorbed by thousands of companies renting GPUs at scale, the market needs to prove: where are these companies, what are their business models, and do they have enough revenue to cover compute costs?
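To give a rough sense of what 15.2GW implies, here is a back-of-envelope calculation. The only figure taken from the article is the 15.2GW; the per-accelerator power draw, rental price, and utilization below are illustrative assumptions, not sourced data.

```python
# Back-of-envelope: how much paying demand does 15.2GW of new capacity imply?
# Every per-unit figure below is an illustrative assumption.

capacity_w = 15.2e9          # 15.2 GW under construction (from the article)
power_per_accel_w = 1500     # assumed facility power per accelerator, incl. overhead
accelerators = capacity_w / power_per_accel_w

hourly_rate_usd = 3.0        # hypothetical GPU rental price per hour
utilization = 0.6            # hypothetical average utilization
annual_revenue_needed = accelerators * hourly_rate_usd * utilization * 24 * 365

print(f"~{accelerators / 1e6:.1f}M accelerators")
print(f"~${annual_revenue_needed / 1e9:.0f}B/yr of rental demand to fill them")
```

Under these assumptions, filling the pipeline requires on the order of ten million accelerators generating roughly $160B of rental revenue per year, which is exactly the kind of end-demand question the article says the market has yet to answer.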
Another key point in the article is that there’s a high transmission relationship between AI software revenue and compute revenue. Many AI startups seem to be generating revenue, but to provide services, they need to invoke models from OpenAI or Anthropic, or rent GPU compute from cloud providers. As a result, the funding and revenue of startups ultimately flow to a few core model companies and cloud infrastructure providers.
This pattern results in two outcomes. First, industry chain income becomes increasingly concentrated at the top. Second, even if downstream application companies see revenue growth, they may struggle to generate healthy profits because model invocation and compute costs continue to eat cash flow.
This is also why the prosperity of AI application layers cannot be simply equated with the overall industry prosperity. If many application companies only turn financing into API call fees, lacking pricing power and profit margins, they are more like channels for core model companies rather than independent, sustainable businesses.
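The margin squeeze on application-layer companies can be sketched with simple unit economics. The subscription price and cost figures below are hypothetical, chosen only to illustrate how model-API fees eat into an app company's gross margin.

```python
# Illustrative unit economics for an AI application that wraps a third-party
# model API. All numbers are hypothetical.

def gross_margin(price_per_user, api_cost_per_user, other_serving_cost):
    """Gross margin per user: (revenue - cost of goods sold) / revenue."""
    cogs = api_cost_per_user + other_serving_cost
    return (price_per_user - cogs) / price_per_user

# A $20/month subscription where a heavy user generates $14 of model-API
# calls plus $3 of other serving costs leaves only a 15% gross margin.
m = gross_margin(20.0, 14.0, 3.0)
print(f"{m:.0%}")  # 15%
```

At margins like this, most of each subscription dollar passes straight through to the model provider and the cloud, which is the "channel for core model companies" dynamic the article describes.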
From a media perspective, this is especially relevant for domestic AI entrepreneurs. The Chinese large-model industry faces similar issues: whether application-layer companies can break free from high costs of underlying models and cloud resources, and whether they can develop their own data, scenarios, and customer stickiness, will determine if they are just “model capability display layers” or truly sustainable companies.
Zitron further points out that the influence of OpenAI and Anthropic extends beyond cloud providers. Their compute demands also continue to spread outward through NVIDIA, server ODMs, new cloud companies, and data center developers. As long as the market believes AI compute demand will grow infinitely, sales of GPUs, server orders, data center construction, and cloud company valuations can all be supported.
But the core issue remains demand quality. An industry can create temporary prosperity through capital expenditure, but cannot sustain it without genuine demand. If the clients of new cloud companies are still mainly OpenAI, Anthropic, Meta, or the cloud giants serving them, then customer concentration across the entire ecosystem will remain very high.
This doesn’t mean AI lacks value or that large models have no long-term demand. On the contrary, AI is transforming software, content, search, programming, and enterprise services. But what the capital markets price isn’t necessarily “AI usefulness,” but whether “AI can support hundreds of billions of dollars in infrastructure expansion.” There’s a huge difference.
It’s important to note that Zitron’s stance is very clear and even highly critical. He describes the current AI compute economy as a “huge scam, illusion, and mistake.” Such judgments are certainly not industry consensus and shouldn’t be taken as definitive.
But the questions he raises are indeed worth serious discussion.
For the Chinese market, the more valuable approach isn't simply judging whether "the AI bubble will burst," but observing the AI investment boom from a different angle: don't just look at model parameters, funding amounts, GPU counts, and data center scale; focus instead on who the ultimate customers are, where the revenue comes from, who bears the costs, and whether the economics actually close.
If AI truly delivers enough productivity gains, compute infrastructure will be absorbed. But if most growth is driven by capital, cloud bills, and future order cycles among a few companies, then the fragility of this prosperity is likely higher than it appears.
Conclusion: the key question in AI shifts from "is there demand?" to "what is the quality of that demand?"
AI’s long-term value doesn’t necessarily equal that all current AI infrastructure investments are justified. Large models may continue to improve, AI applications may keep spreading, and companies may further automate. But at the same time, the cyclical relationship between capital expenditure, cloud revenue, and GPU demand still needs more transparent scrutiny.
The most valuable aspect of this article isn’t whether it’s entirely accurate, but that it reminds us: the real risk in the AI industry isn’t “no one uses AI,” but “AI usage income isn’t enough to cover the costs of building AI.”
When an industry increasingly depends on a few super clients, a handful of cloud giants, and continuous financing to sustain growth narratives, investors, entrepreneurs, and observers should ask the same question: is this a new wave of infrastructure development, or a capital illusion built on future income and cyclical payments?