Who will be America's first large-model stock?

At an AI summit, Altman and Amodei decline to shake hands

Author: Su Yang, Tencent Technology

The race between OpenAI and Anthropic to go public has become the most closely watched IPO contest in Silicon Valley.

Neither company wants to fall behind the other, and both hope to complete an IPO by the end of 2026. But behind the race for the title of "first AI model stock" lie clear differences in their financial situations and internal pace.

OpenAI CEO Sam Altman wants to list as soon as possible, but his chief financial officer believes the company isn’t ready yet. Anthropic’s revenue growth is fast, but it also faces intense pressure from huge compute costs. Both companies rely on large-scale compute power to maintain competitiveness, and the payback period for such investment is far from certain.

OpenAI’s internal disagreements

Altman wants OpenAI to go public as early as the fourth quarter of this year. But according to confidential financial documents OpenAI shared with investors in its latest funding round, the company expects cumulative losses to exceed $200 billion before it begins generating positive cash flow.

One financial document shows that OpenAI expects its compute spending to reach $121 billion in 2028. Even if sales that year nearly double from the prior year, the company still expects a loss of $85 billion, a scale of losses extremely rare among public companies.

OpenAI revenue roadmap

But CFO Sarah Friar’s view differs from Altman’s. She doesn’t think the company can be ready to go public in 2026.

Friar's reasoning: the company's processes and organizational structure aren't yet in place, and the risks created by its spending commitments are too large. She also isn't sure whether OpenAI needs to invest so much in AI servers in the coming years, or whether revenue growth that has already slowed can support those commitments.

In addition, Microsoft and Nvidia currently hold sizable stakes in OpenAI. As strategic shareholders whose investments come with deep operational ties and performance-contingent ("bet-on") terms, they too could affect the timing of an IPO.

As for the disagreement between CEO and CFO, Friar has tended to downplay it in public, saying only that an IPO is "not under consideration at this time" because OpenAI is still working to build processes and structures that match its current scale.

Around the question of an IPO, the relationship between Altman and Friar has clearly begun to shift.

In August 2025, Friar stopped reporting directly to Altman and began reporting to Fidji Simo, who had joined to lead OpenAI's applications business. The arrangement is unusual at large companies, where the CFO typically reports directly to the CEO.

Multiple people who have worked with Friar told The Information that Altman has excluded her from certain conversations about the company's financial planning. In recent months, for example, when Altman discussed server spending with one of OpenAI's largest investors, Friar wasn't in the room, though she had taken part in earlier conversations on the same topic.

Another person who attended a senior leadership meeting at OpenAI earlier this year said that major financial decisions were on the agenda and that Friar was not invited, which is also unusual.

Notably, the concerns Friar has voiced privately closely echo recent public remarks by Anthropic CEO Dario Amodei.

On a podcast this February, Amodei said: "Even if the technology really develops at the fastest pace I predict, it's not clear that revenue can keep up. But the problem is, when you buy data centers, you buy them on that (expected revenue) timeline. If your judgment is off by a year or two, that could be catastrophic."

Amodei believes that even if you’re wrong by just one year—or if growth isn’t tenfold but fivefold—the result is bankruptcy. He added: “I’ve got a feeling that some companies don’t seem to have seriously penciled this out. They don’t even know how much risk they’re carrying.”

So who are “some companies”?

Is Anthropic dressing up its financials?

Financial data obtained by The Wall Street Journal suggests that Anthropic's revenue is growing faster than OpenAI's.

Its annualized revenue has already surpassed $30 billion, up from about $9 billion at the end of 2025. When it announced its Series G financing in February this year, Anthropic said more than 500 enterprise customers each had annualized spending of over $1 million. That number is now over 1,000.

In less than two months, it doubled.

OpenAI vs. Anthropic profit comparison

According to data compiled by The Wall Street Journal, even with training costs included (bar chart), Anthropic could turn profitable as early as 2028, while OpenAI would need until 2030. With training costs excluded (line chart), Anthropic was essentially at break-even in 2024 and 2025.

Mizuho Financial Group analysts estimate that Broadcom’s AI revenue from Anthropic will reach $21 billion in 2026 and $42 billion in 2027.


OpenAI vs. Anthropic annualized revenue across business segments

It's worth noting that the two companies calculate revenue differently, which is part of why OpenAI's revenue growth looks slower than Anthropic's.

One key difference: Anthropic counts sales of its technology through cloud partners as its own revenue, while OpenAI does not. That makes Anthropic's reported revenue look better on paper. Anthropic responds that the treatment follows standard accounting practice because the company is the principal in those transactions.

And despite voicing concern that revenue may not keep up, Anthropic has never paused its investment in compute capacity.

According to Anthropic's official disclosures, it has signed new agreements with Google and Broadcom for next-generation TPU compute capacity measured in gigawatts, expected to come online starting in 2027, with the vast majority of the new facilities located in the United States. Anthropic CFO Krishna Rao calls it "the most important compute investment commitment to date."

Inference costs are another heavy burden.

OpenAI vs. Anthropic free cash flow comparison

ChatGPT's consumer tier generates substantial revenue, but paying users are only a small fraction of the total, which means much of the inference cost never translates into revenue. Anthropic is somewhat better positioned: most of its revenue comes from enterprise customers.

An OpenAI spokesperson said the company supports free users to promote technology adoption, and it can profit through advertising or converting subscribers, among other methods. The spokesperson emphasized that the company prioritizes growth over profits.

The pricing-model dilemma

How large-model companies should price in a way that avoids losses is still an unresolved question.

Recently, Luo Fuli, head of Xiaomi's large-model team, analyzed this issue in a post. She argues that Claude Code's subscription scheme is cleverly designed but may still be unprofitable, or even loss-making, unless Anthropic's API carries a markup of 10 to 20 times its cost, which she doubts.

"On a single user query, some wrapper tools initiate multiple rounds of low-value tool calls. Each round is an independent API request, and each request carries an extremely long context, often exceeding 100k tokens. Even with cache hits, it's still wasteful," Luo Fuli said.

By Luo Fuli's estimate, wrapper tools issue several times as many requests per query as the Claude Code framework itself. Converted to API pricing, the true cost could be dozens of times the subscription price: an "enormous money pit."
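Her back-of-the-envelope argument can be sketched with a toy cost model. Every number below (rounds per query, context size, per-token prices, usage volume) is an illustrative assumption, not Anthropic's actual pricing:

```python
# Toy estimate of the API-equivalent cost of one user query when a wrapper
# tool fans out into many long-context requests. All figures are assumptions.

def cost_per_query(rounds, input_tokens, output_tokens,
                   price_in_per_mtok, price_out_per_mtok):
    """Dollar cost of one user query that triggers `rounds` API calls,
    each resending roughly the same long context."""
    per_call = (input_tokens * price_in_per_mtok +
                output_tokens * price_out_per_mtok) / 1_000_000
    return rounds * per_call

# Assumed: 8 tool-call rounds per query, ~100k-token context each,
# 1k output tokens, and hypothetical rates of $3 / $15 per million
# input / output tokens.
q = cost_per_query(rounds=8, input_tokens=100_000, output_tokens=1_000,
                   price_in_per_mtok=3.0, price_out_per_mtok=15.0)
monthly = q * 20 * 30  # 20 such queries a day for a month
print(f"per query: ${q:.2f}, per month: ${monthly:.0f}")
# per query: $2.52, per month: $1512
```

Under these assumed inputs, a single query costs a few dollars in API terms and a month of moderate use runs into four figures, dozens of times a typical subscription fee. That gap is the "pit" Luo Fuli points to.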

Luo Fuli said: "Until large-model companies figure out how to price their services without losing money, they shouldn't blindly charge into a price war."

She believes that selling tokens at rock-bottom prices while throwing the door open to third-party wrapper tools looks like a win for users but is actually a trap: "If users end up wasting their attention on low-quality agent harnesses, unstable and slow inference services, and models downgraded to cut costs, then in the end they still can't get anything done. That isn't a healthy cycle for user experience or retention."

Conclusion

OpenAI and Anthropic are both competing for the title of America's "first large-model stock," and both are bound up in ongoing fundraising and performance-contingent ("bet-on") agreements. Both must keep burning cash, and whether the business returns will materialize remains an open question.

However, their situations also show clear differences.

OpenAI faces internal disagreement over the timing of its IPO, while Anthropic must keep compute costs under control even as revenue grows rapidly. And judging by industry momentum and public reputation, Anthropic's brand is starting to overtake OpenAI's.

You could say that in the exploration of large models, no one stays first forever; a wrong turn on the technical roadmap can let competitors pull ahead. OpenAI was the first to open up the chatbot-style AI assistant, but it may not stay ahead in every business at all times.

In fact, from the standpoint of a healthy industry, with compute costs rising and pricing models still immature, building a sustainable business model may matter more than the title of "first stock."

But that judgment doesn't apply to the storytellers.
