Microsoft and OpenAI "part ways": The era of exclusive models has ended
Author: Ada, Deep Tide TechFlow
Recently, Microsoft and OpenAI jointly announced revisions to their partnership agreement. The exclusive cloud restrictions were lifted, IP licensing was downgraded from exclusive to non-exclusive, and the AGI escape clause was removed.
After the news broke, nearly all Chinese media were asking the same question: who won? But that is not the core issue.
What this “breakup” truly buried was the competitive logic of an entire era in the AI industry: whoever locked up the best model won.
And under the new rules of the game, the stakes have shifted from models to something else.
Models Are No Longer Scarce
First, let’s look at a set of numbers.
The total infrastructure commitments OpenAI has disclosed are: 250 billion US dollars with Microsoft Azure, 300 billion US dollars with Oracle’s Stargate project, and 138 billion US dollars with Amazon AWS (the original 38 billion plus an additional 100 billion, over an 8-year period).
That adds up to more than 680 billion US dollars, while OpenAI’s annualized revenue is only about 25 billion US dollars.
A company earning 25 billion US dollars a year has signed compute bills totaling more than 680 billion. OpenAI is essentially selling itself to compute providers; it is now a flagship anchor customer for all three major cloud vendors.
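The mismatch is easy to quantify. A back-of-the-envelope sketch using only the figures reported above (all amounts in billions of US dollars):

```python
# Reported compute commitments from OpenAI (billions of USD), per the figures above
commitments = {
    "Microsoft Azure": 250,
    "Oracle (Stargate)": 300,
    "Amazon AWS": 138,  # the original 38 plus an additional 100, over 8 years
}

total = sum(commitments.values())  # 688, i.e. "more than 680 billion"
annual_revenue = 25                # OpenAI's annualized revenue, billions of USD

print(f"Total committed: ${total}B")
print(f"Revenue multiple: {total / annual_revenue:.1f}x")  # roughly 27x annual revenue
```

In other words, the committed compute spend is on the order of 27 years of current revenue, which is the whole tension the rest of the article turns on.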
Anthropic is the same. Last week it signed an expanded cooperation agreement with Amazon, committing to spend more than 100 billion US dollars on AWS over the next ten years in exchange for 5 gigawatts of compute capacity. Four days later, it signed a 3.5 gigawatt TPU capacity agreement with Google and Broadcom, expected to come online in 2027. Add Google’s announcement last week of an investment of up to 40 billion US dollars, and Anthropic is now locked in by two cloud giants at once.
The two most cutting-edge AI companies are both trading their futures for compute capacity.
Now look back: what exactly did Microsoft buy with the 1 billion US dollars it invested in OpenAI in 2019?
It was exclusive distribution rights for the models. Azure had exclusivity over the GPT series; if customers of other cloud providers wanted to use OpenAI’s models, the answer was: sorry, move to Azure.
That was the era of “model scarcity.” GPT was the only mainstream large language model that could really hold its own. Whoever had it had pricing power.
But the reality in 2026 is this: models are no longer scarce.
Anthropic’s Claude, Google’s Gemini, and Meta’s open-source Llama all run on multiple cloud platforms. Ramp’s enterprise spending data shows that 79% of enterprises that pay for Anthropic also pay for OpenAI. Corporate customers simply do not want to be locked into a single platform.
OpenAI itself has also figured it out. In a March internal memo, Chief Revenue Officer Denise Dresser put it plainly: “Our partnership with Microsoft laid the foundation, but it also limits our ability to meet the real needs of enterprise customers.”
In other words, exclusive binding used to be an advantage, and now it’s a shackle.
The model layer is being rapidly commoditized. When all mainstream models can run on all mainstream clouds, the value of exclusive model distribution rights approaches zero.
So what is the value that’s rising? It’s compute power.
Look at the data. In two months, Amazon has committed hundreds of billions of US dollars across deals with OpenAI and Anthropic. Google is investing 40 billion US dollars into Anthropic while continuing to fund its own Gemini. Microsoft is loosening its grip on OpenAI while also letting Mustafa Suleyman lead independent superintelligence research.
At the core of every deal are compute, chips, and data centers. Models, in contrast, have become something of a throw-in.
Electricity Is Like Oil
Let’s return to the revised agreement between Microsoft and OpenAI.
On the surface, OpenAI gained freedom to sell models on AWS and Google Cloud. Although Microsoft lost the exclusive rights, it kept a 27% equity stake and a non-exclusive IP license until 2032.
Turning exclusive into non-exclusive sounds like a win for OpenAI, but the 250 billion US dollar Azure procurement commitment is still there, and OpenAI products will still launch first on Azure unless Microsoft chooses not to support them. This is not de-linking; it is replacing a chain with a pipeline. Previously the lock-in ran through contracts; now it runs through infrastructure.
OpenAI’s current situation is that it has signed compute contracts worth 250 billion US dollars with Azure, 138 billion with AWS, and 300 billion with Oracle. Each contract is multi-year and comes with specific chip architectures and deployment plans. Technically, it has gained “multi-cloud freedom,” but financially it is bound by the compute contracts of three cloud vendors at once. This looks less like independence and more like going from one landlord to three.
Now zoom out further.
In 2023, ChatGPT suddenly emerged, and everyone said: models are the new oil. Whoever controls the best models controls the future.
Two and a half years later, oil has become tap water. Models are still important, but they are no longer scarce. What is truly scarce is the electricity, chips, and physical space required to run those models.
This is very similar to how the internet evolved in its early days. In the 1990s, everyone was competing for content and traffic entry points. In the end, the winners were the ones who managed the pipelines: Cisco, AT&T, AWS.
Now the AI industry is going through the same turning point. Model companies think they are the main characters; after signing compute contracts, they look back and realize they have already become long-term users of cloud vendors. Those contracts worth hundreds of billions of dollars are not empowerment agreements—they are entrapment agreements.
What did Microsoft get in exchange for giving up exclusive distribution rights to OpenAI’s models? A 250 billion US dollar Azure revenue commitment.
From a business perspective, did Microsoft lose money?
A CNBC report mentioned that Barclays analysts believe this is a marginal positive for Microsoft. It will no longer need to shoulder the full funding pressure of building OpenAI’s data centers, and can free up money for Copilot and other cloud businesses.
Microsoft exchanged “exclusive rights” for “certain revenue.” It shifted from a venture capital logic to a utility logic.
The entire AI industry is undergoing this kind of transformation. The speed at which leading model companies burn cash keeps increasing, while the bills cloud vendors receive keep getting thicker. The valuations of model companies swing wildly, but the cash flow of cloud vendors keeps growing steadily.
Axios mentioned a detail in a report last week: OpenAI wrote to investors the week before, calling compute scale its core competitive advantage over Anthropic and claiming that Anthropic made a “strategic mistake” for not acquiring enough compute capacity.
A few days later, Anthropic signed two new compute agreements totaling more than 8 gigawatts.
This is the AI race of 2026: not who has smarter models, but who locks in more electricity.
And within this restructuring, there is a beneficiary that is rarely discussed: Amazon.
Amazon now holds large equity stakes in both Anthropic and OpenAI. Both of the two leading AI labs have committed to spending more than 100 billion US dollars on AWS.
Invest 50 billion US dollars into OpenAI in exchange for 138 billion US dollars of AWS revenue. Invest 33 billion US dollars into Anthropic in exchange for more than 100 billion US dollars of AWS revenue.
Amazon doesn’t care who wins. It cares that no matter who wins, the electricity bill gets sent to it.
The Truth About Contracts
The day after Microsoft and OpenAI announced their “de-linking,” The Wall Street Journal published a report saying that OpenAI had failed to meet internal revenue targets for multiple consecutive months in the first quarter of 2026, and user growth was also below expectations.
In an internal warning, CFO Sarah Friar said that if revenue growth didn’t accelerate, the company might not be able to afford future compute contracts.
The reality is that revenue is still stuck at 25 billion, while compute contracts have already been signed for more than 680 billion.
Market reaction is more honest than any commentary. On the day the WSJ report came out, Oracle’s stock fell 7.7%, CoreWeave fell 7.4%, SoftBank’s shares in Tokyo fell by nearly 10%, and Nvidia, AMD, and Broadcom declined by 2% to 6%. Investors were not selling OpenAI—they were selling every company that was counting on OpenAI to cash in on its compute bills.
Gabelli Funds fund manager John Belton told CNBC that OpenAI’s growth clearly slowed from the end of 2025 to the beginning of 2026, as market share was being eaten away by Anthropic and Gemini. With too many compute contracts signed, OpenAI can’t pay the bills.
That is the real picture of the “end of the exclusive era.”
OpenAI gained the freedom to sell models across three clouds. The cost is that it is bound by compute contracts from all three. It has gone from being Microsoft’s exclusive partner to being a long-term paying customer of Azure, AWS, and Oracle. Each contract is multi-year, and each comes with a specific chip architecture and deployment plan; each assumes that revenue will continue to grow at high speed.
OpenAI thought it had bargaining power, but in 2026, when compute supply is tight, that bargaining power is not on the model companies’ side. Whoever has the electricity, the chips, and the physical space gets to decide. The contracts worth hundreds of billions of dollars that the model companies sign are not procurement agreements; they are contracts of servitude. Once signed, the cost of switching becomes unacceptably high. When a model has run on Trainium for two years, migrating to another chip architecture means re-optimizing the entire training pipeline; it is not as simple as switching cloud accounts.
The “breakup” between OpenAI and Microsoft looks like a declaration of independence for the AI industry. But open up the contract details and you find the 250 billion US dollar Azure commitment still in place, the CFO warning internally that the bills may be unaffordable, revenue missing targets month after month, and competitors stealing market share. Every proposed way out of these problems rests on the same premise: that revenue in 2030 will be 11 times today’s.
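That 11x premise implies a compound growth rate worth spelling out. A minimal sketch, assuming a 2026 base year at the 25 billion US dollar annualized revenue cited above and a 2030 target (the base and target years are my assumption, not stated in the article):

```python
# Implied growth behind the "2030 revenue is 11 times today's" premise.
# Assumptions (not from the article): base year 2026, target year 2030.
base_revenue = 25      # billions of USD, today's annualized revenue
multiple = 11          # premise: 2030 revenue = 11x today's
years = 2030 - 2026    # 4 years of compounding

target = base_revenue * multiple            # implied 2030 revenue, billions of USD
implied_cagr = multiple ** (1 / years) - 1  # compound annual growth rate

print(f"Implied 2030 revenue: ${target}B")
print(f"Implied annual growth: {implied_cagr:.0%}")  # roughly 80%+ per year
```

Under those assumptions, revenue would have to compound at over 80% per year for four straight years to make the compute bills payable, which is the gap the WSJ report and the CFO warning point at.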
Those who manage the pipelines never talk to you about ideals. They only talk about contract durations, delivery timelines, and breach clauses.
The final winners of this AI arms race may be neither the company with the best models nor the one with the most funding, but the infrastructure providers who take deposits, sign long-term contracts, and collect rent no matter who wins. It is the gold-rush story repeating itself: the ones who get rich are always the ones selling shovels.