After leaving OpenAI, how many times over have their fortunes multiplied?

The only way to truly hold an informational advantage is to bet before others set the price.

Over the past two years, everyone has been anxious, trying to find the answer to the same question: what’s the next sector to rise under AI?

Storage, optical modules, computing stocks, energy stocks, and so on: every few months a new narrative emerges, each time some people miss out, and each time someone promises that next time will be different.

Few people ask another question: what are those who understand AI best betting on?

The group that has left OpenAI now has a combined net worth approaching $1 trillion, and their startups and investments sit at the forefront of the next AI era.

Dario Amodei founded Anthropic, with a potential valuation of $900 billion. Ilya Sutskever’s SSI has no product but is valued at $32 billion. Aravind Srinivas created Perplexity, valued at $21.2 billion. Mira Murati’s Thinking Machines Lab is valued at $12 billion.

So the most important output of OpenAI in recent years might not be GPT-4, but the cohort of former employees now building elsewhere.

Among them, Leopold Aschenbrenner, one of the youngest employees ever fired by OpenAI, has become one of the most frequently cited names in the capital markets over the past two years.

His legendary story has been retold by the media again and again: dismissed from OpenAI at 23, he wrote a 165-page report titled “Situational Awareness,” then within a year grew his hedge fund from $225 million to $5.5 billion in assets, betting heavily on nuclear power and fuel cells and hitting on nearly every position.

The story is too complete, the contrast too stark, and the outcome too successful. To this day, whenever investment logic in the AI era is discussed, he is almost impossible to ignore.

But Leopold is just the most visible among this group.

People who leave OpenAI have gradually taken two paths.

One is the route taken by Ilya, Mira, and Aravind: start a company, raise huge funding, and push a disruptive product—just like every Silicon Valley genius leaving their post.

The other is much quieter: some choose to bet, delegate execution to others, and focus solely on making judgments.

Leopold took the extreme form of the second path.

He entered the public markets and, viewing the AI industry through an operator’s eyes, found mispriced assets among traditional energy stocks and bought heavily. He is no energy specialist, but he knows how much electricity AI will burn, and that is enough. This insight can’t be replicated by reading reports or attending industry conferences; only those who have sat in that seat can accumulate it.

Beyond this path, another group is applying similar logic in a different form: smaller funds that can complete due diligence in hours instead of months, whose rejection lists are more valuable than their investment lists. They form the most overlooked, yet most worth exploring, layer of this great exodus.

Most people leave a company with only their resume. Those leaving OpenAI carry with them a set of answers that others don’t yet realize they need.

One, There’s no second Leopold

Leopold’s heavy bets are on nuclear power company Vistra and fuel cell company Bloom Energy.

After two successful bets, he gradually rebalanced by the end of 2025, selling Vistra and further concentrating funds into Bloom Energy and data center infrastructure.

Traditional energy analysts covering these two stocks map grid expansion plans, compare carbon tax policies, and build demand growth models. Leopold’s approach is completely different.

At OpenAI he saw the scale of the server farms, the electricity bills for training flagship models, and the engineers debating why next-generation data centers must sit near nuclear power plants. None of these details appear in any financial report or analyst briefing, but together they form a conclusion about energy demand more real than any model.

This strategy is called “cross-industry cognitive arbitrage”: translating internal information from one industry into undervalued assets in another.

Historically, this was the preserve of top macro hedge funds, which relied on a global macroeconomic vantage point.

Leopold did something more precise: using an operator’s perspective in the AI industry, he found pricing inefficiencies in the traditional energy market.

This path is very hard to replicate.

Two, Zero Shot: the most valuable thing is that rejection list

Evan Morikawa, founder of the Zero Shot fund and another OpenAI alumnus with a solid technical background, has moved into venture capital.

Same former employer, completely different path.

Leopold’s judgment comes from hands-on experience in core AI roles: firsthand perception of model training costs, data center planning, and energy demand. Only someone who held that position can accumulate it; there is no fast-forward button, and only a handful of people from OpenAI’s core roles are equipped to do the same.

In April this year, a new $100 million fund quietly surfaced, called Zero Shot.

“Zero shot” is a term from AI training, referring to a model performing a task without having seen a single example of it.
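As a rough illustration (not from the article, and independent of any particular model API), the difference between zero-shot and few-shot prompting can be sketched as two prompt templates:

```python
# Hypothetical sketch: zero-shot vs. few-shot prompts as plain strings.
# Names and the example task are illustrative, not from the source.

def zero_shot_prompt(task: str, item: str) -> str:
    # Zero-shot: the model receives only the task description, no examples.
    return f"{task}\nInput: {item}\nAnswer:"

def few_shot_prompt(task: str, examples: list, item: str) -> str:
    # Few-shot: the same task, but with labeled samples prepended.
    shots = "\n".join(f"Input: {x}\nAnswer: {y}" for x, y in examples)
    return f"{task}\n{shots}\nInput: {item}\nAnswer:"

task = "Classify the sentiment of the sentence as positive or negative."
print(zero_shot_prompt(task, "The fund beat the market."))
print(few_shot_prompt(task, [("Great quarter.", "positive")], "The fund lost money."))
```

The fund's name is a wink at this: like a zero-shot model, they claim to judge startups correctly without needing to see prior examples of the category.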

Three co-founders are from OpenAI: Evan Morikawa, former head of DALL-E and ChatGPT application engineering; Andrew Mayne, the original prompt engineer; and Shawn Jain, former researcher and engineer.

They have already invested in three companies: AI workflow startup Worktrace, AI-enhanced factory robot company Foundry Robotics, and another stealth project.

$100 million is a small figure in an era when AI funds are routinely measured in billions.

But it’s more telling to consider what sectors they refuse to invest in.

Mayne openly states he is bearish on most “vibe coding” tools, products that help you write code via natural language.

His reasoning is straightforward: he knows what OpenAI has accumulated in programming, and how quickly these tools’ moats will be eroded by foundation models. Morikawa, likewise, keeps his distance from many of the robotics sector’s “human-centered video data companies,” those collecting human motion data to train robots, believing this technical route will hit a wall.

These judgments are beyond what ordinary VCs can make.

They haven’t been at the source of the information and haven’t seen the internal discussions, so they can’t judge which paths are dead ends.

Zero Shot’s advantage lies in its rejection list. In a market where everyone is shouting about AI startups, knowing where the pitfalls are is more valuable than knowing whom to bet on. To those who have walked the terrain, a map of the minefields is more useful than a treasure map.

They deliberately cap the fund at $100 million, and the reasons are specific.

They understand where their advantage is most valuable: in the early stages, before the technical routes have converged. At that stage, insiders can tell at a glance which paths are viable.

Once projects reach Series C or D, financial data and public information overshadow any informational edge; that card has been played out.

The larger the fund, the more it must chase “high-certainty tracks” and play by everyone else’s rules.

$100 million is their honest judgment of their own boundary of advantage.

Three, Angel investing is a different business

Mira Murati and Zero Shot Fund both invested in former OpenAI colleague Angela Jiang’s Worktrace, a company that uses AI to optimize enterprise workflows.

But their investment logic rests on something far more solid than personal rapport.

Mira has seen how Angela makes decisions in OpenAI’s high-pressure environment, how she judges the boundaries of AI products, and how she executes under real constraints. None of that can be faked in a two-hour founder pitch, nor fully uncovered through due diligence.

Angela doesn’t need to convince Mira; Mira has already formed her judgment. The information cost of angel investing is near zero, but the quality of information far exceeds the market average.

A bigger flywheel is at Sam Altman’s place.

Reportedly, Altman decides within hours of hearing about an ex-employee’s startup whether to co-invest, often layering in the OpenAI Startup Fund’s capital and extensive API resources.

He doesn’t hold OpenAI equity himself, but every success among his alumni expands OpenAI’s data access, distribution channels, and policy influence. He is maintaining an ecosystem that doesn’t belong to him yet keeps generating returns: an invisible equity stake that compounds in the real world.

This ecosystem leads many to mistakenly think it’s just old colleagues sticking together.

Compare it with the PayPal Mafia, and the differences become clear.

The PayPal Mafia’s cohesion came from shared hardship: surviving the payment wars, the eBay acquisition, and the near-death years in the trenches. That trust is real, but their judgments about the future were individual: Thiel went into venture capital, Musk builds rockets, Hoffman built a social network. Their paths diverged.

OpenAI alumni are united by a shared bet on the future: AGI will come, the window is limited, and now is a once-in-a-lifetime opportunity. Faith binds longer than friendship because it directly aligns interests: once the bets pay off, the entire network benefits.

This also makes the entry barrier into this circle quite subtle.

Building a product good enough to attract their funding is not the hard part. But if you are skeptical about AI’s future, or your startup’s premise is that “AGI is still far away,” even the best product will struggle to get their check.

A difference in worldview ends the conversation before the handshake.

Four, From Builders to Investors

The paths of OpenAI alumni can be summarized into three categories.

Ilya, Aravind, and Mira all chose to start companies.

But they’re doing very different things: Aravind runs a fiercely competitive consumer business, Mira is building a tool platform, and Ilya’s SSI, with no product yet, is valued at $32 billion, a bet on the word “safety” itself.

Leopold and Zero Shot chose to invest.

Leopold entered the public markets; Zero Shot is an early-stage VC. Both are about externalizing judgment into capital rather than executing themselves. This is rare among OpenAI alumni, but worth a separate look: someone willing to bet without building usually means their judgment is so clear that they don’t need to explore through action.

People often think the highest form of genius is creation. But this group offers another answer: when judgment is clear enough, dispersing cognition across multiple directions and letting capable executors build is a more efficient choice.

Leopold’s report is titled “Situational Awareness,” a military term describing a pilot’s real-time perception of the battlefield.

A pilot’s situational awareness determines what they do two seconds later; losing it means death. These people, coming out of OpenAI, have that kind of battlefield awareness of AI: they know where the trend is heading, where the high ground is, and which trenches are dead ends.

What they’re doing now is deploying based on this understanding.

That the smartest people of the era are choosing to go all-in suggests they believe the answer is already clear enough, so clear that no further testing is needed.

