I'm currently reading a 165-page book. The author, Leopold Aschenbrenner, accurately predicted the current AI development trends two years ago.


He was fired from OpenAI in April 2024, and by June he had published this book, *Situational Awareness*, which is essentially a fundraising document.
In September, he launched his own hedge fund, which grew from over $200 million to $5.5 billion within a year, roughly a 24-fold increase.
In the first half of 2025, it achieved a net return of 47%.
As I read, I started to wonder: what gives him that confidence?
What allows a 22-year-old to write about today’s world as if he’s seen the future?
He can see the future because he was standing in the room where it was being created.
In San Francisco, he worked directly under OpenAI’s chief scientist Ilya Sutskever on the Superalignment team.
This book is his tribute to Ilya.
Looking back today, almost everything he wrote two years ago has come true.
He said that in the short term, AI’s biggest shortage isn’t algorithms but computing power, HBM memory, data centers, and electricity.
He said the real bottleneck lies in CoWoS advanced packaging.
He said the U.S. power grid will become the first obstacle that stalls everyone.
He predicted a “trillion-dollar cluster.” Later, all these views became headlines.
OpenAI named that cluster Stargate.
But that’s just the appetizer. He wrote in the book:
By 2027, AGI (Artificial General Intelligence) will arrive.
The logic runs like this: over the past four years, AI grew from a “preschooler” (GPT-2) into a “smart high school student” (GPT-4).
Give it another four years, he says, and AI will be able to replace human researchers and train AI itself.
Once AI can research AI on its own, a decade’s worth of human algorithm iterations can be completed in a year.
The “intelligence explosion” will start from that moment.
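To see why “explosion” is the right word, here is the compounding logic as a toy calculation. Every number below is my own illustrative assumption, not a figure from the book:

```python
# Toy compounding model of automated AI research.
# All constants are illustrative assumptions, not the book's figures.
speed = 10.0     # assumed speedup once AI researchers replace human ones
progress = 0.0   # cumulative progress, in human-researcher-years

for month in range(1, 13):
    progress += speed / 12              # one month of research at current speed
    speed = 10.0 * (1 + progress / 10)  # assumption: each decade of progress
                                        # doubles the research speed again
    print(f"month {month:2d}: {progress:5.1f} years of human-pace progress")

# By month 12 the loop has packed roughly 16 "human years" of algorithmic
# progress into a single calendar year: the feedback loop is the explosion.
```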
By then, humans won’t understand what AI is doing anymore.
The code it writes, the decisions it makes—how do we know it’s not deceiving us?
Leopold offers three remedies in the book.
1. Weak-to-strong supervision.
Use a less capable AI that humans can understand to supervise a far more powerful one.
The gamble is that the weaker model can still detect when the stronger one is being malicious.
(Leopold is himself a co-author of OpenAI’s paper on this idea, “Weak-to-Strong Generalization.”)
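A toy version of that bet, as a sketch of my own (not the paper’s actual setup, which fine-tunes a strong pretrained model on a weaker model’s labels): train a strong student on a supervisor that is right only 75% of the time, and check whether the student ends up more accurate than its teacher.

```python
# Weak-to-strong supervision, reduced to a toy. The "weak supervisor" is
# simulated as 25% random label noise on a nonlinear ground truth.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def truth(X):
    # Ground-truth rule the strong student is capable of representing.
    return (X[:, 0] * X[:, 1] > 0).astype(int)

X_train, X_test = rng.normal(size=(5000, 2)), rng.normal(size=(2000, 2))
y_train, y_test = truth(X_train), truth(X_test)

# Weak supervisor: correct only 75% of the time; its errors are random.
flip = rng.random(len(y_train)) < 0.25
weak_labels = np.where(flip, 1 - y_train, y_train)

# Strong student sees ONLY the weak supervisor's labels, never the truth.
student = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
student.fit(X_train, weak_labels)

print(f"weak supervisor accuracy: {(weak_labels == y_train).mean():.2f}")  # ~0.75
print(f"strong student accuracy:  {student.score(X_test, y_test):.2f}")    # typically >0.9
```

The student can beat its teacher here because the teacher’s mistakes are unsystematic; the open question the real research probes is whether that still holds when the stronger model’s mistakes are strategic.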
2. AI debate.
Set several AIs against each other to challenge errors and expose lies.
Humans act as quiet judges, using the inconsistencies to identify which one is lying.
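Reduced to a skeleton, the protocol (the idea traces to “AI safety via debate,” Irving et al., 2018) looks like this. The debaters below are stand-in functions; the real proposal would use frontier models prompted to argue opposite sides:

```python
# Skeleton of a two-player debate; a human judge reads the transcript.
from typing import Callable

# A debater maps (question, transcript so far) to its next argument.
Debater = Callable[[str, list], str]

def run_debate(question: str, pro: Debater, con: Debater, rounds: int = 2) -> list:
    transcript = []
    for _ in range(rounds):
        transcript.append("PRO: " + pro(question, transcript))
        transcript.append("CON: " + con(question, transcript))
    return transcript

# Stand-ins: real debaters would be strong models instructed to expose
# flaws in the opponent's previous statement.
pro = lambda q, t: f"evidence that {q} (round {len(t) // 2 + 1})"
con = lambda q, t: f"flaw in PRO's last claim (round {len(t) // 2 + 1})"

for line in run_debate("the plan is safe", pro, con):
    print(line)
```

The judge never has to out-think either debater, only to score the cross-examination; the bet is that telling a consistent lie is harder than catching one.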
3. Mechanistic interpretability.
First, strip dangerous parameters out during training.
Then open up the AI’s “brain” directly to see what it’s thinking.
Build an “AI lie detector” by finding the “truth direction” inside it.
Leopold admits this is a moonshot-level challenge.
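The “truth direction” part is concrete enough to sketch. One standard tool in the interpretability literature is a linear probe on hidden activations; below the activations are synthetic (the dimensions, shift size, and noise are invented for illustration), standing in for a real model’s internal states:

```python
# Toy "lie detector": fit a linear probe that recovers a planted truth
# direction from simulated hidden activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 128                                  # pretend hidden-state width
truth_dir = rng.normal(size=d)
truth_dir /= np.linalg.norm(truth_dir)   # the planted "truth direction"

# True statements shift activations along truth_dir, false ones shift
# the opposite way; everything else is noise.
n = 2000
labels = rng.integers(0, 2, size=n)
acts = rng.normal(size=(n, d)) + 1.5 * np.outer(2 * labels - 1, truth_dir)

probe = LogisticRegression(max_iter=1000).fit(acts, labels)
learned = probe.coef_[0] / np.linalg.norm(probe.coef_[0])

print(f"probe accuracy: {probe.score(acts, labels):.2f}")                # near 1.0
print(f"cosine(learned, planted direction): {learned @ truth_dir:.2f}")  # near 1.0
```

On a real model, the moonshot is the premise rather than the probe: nobody knows whether frontier models encode “is this true” along anything so clean.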
Reading this, I finally understand why he ends with a photo of Oppenheimer.
He’s treating this as a new Manhattan Project.
He also admits that these three paths, in essence, are just “patches.”
None truly solve the problem.
They’re just bets that humanity can hold on until the day when alignment challenges are outsourced to AI itself.
What we’re doing now isn’t “solving AI safety,” but “hoping AI will solve AI safety for us.”
Sounds a bit like a troubled romance, doesn’t it?
You know exactly what’s wrong with him, yet you keep betting he’ll change.
Back to investing.
The most valuable part of this book isn’t the specific date, “AGI by 2027.”
The margin of error there is wide: maybe a year late, maybe half a year early.
What’s most valuable is that it clearly explains the entire bottleneck hierarchy of the AI industry over the next decade:
Electricity > Advanced Packaging / HBM > Computing Power > Algorithms > Applications.
The higher up, the scarcer; the lower, the more crowded.
Leopold has since verified this hierarchy with real money in the open market.
As I close the book, I think:
Some books, read a year earlier, can be a matter of life and death.
Fortunately, it’s not too late now.
“See you in the desert, friend.”