I've recently been reading a 165-page book by Leopold Aschenbrenner, the author who accurately predicted the current trajectory of AI two years ago.
He was fired from OpenAI in April 2024, and in June he published this book, "Situational Awareness," essentially a fundraising document.
In September he launched his own hedge fund, which grew from just over $200 million to $5.5 billion in a year, roughly a 24-fold increase.
Its net return in the first half of 2025 was 47%.
As I read, I kept wondering: what gives him the right?
What lets a 22-year-old, writing in 2024, describe today's world?
He could see the future because he stood in the room where the future was being created.
In his San Francisco circle, he worked directly under OpenAI's Chief Scientist Ilya Sutskever on the Superalignment team.
This book is his tribute to Ilya.
Looking back today, almost every sentence he wrote two years ago has come true.
He said that in the short term, AI's biggest shortage isn't algorithms but compute: HBM memory, data centers, and electricity.
He said the real bottleneck is hidden in CoWoS advanced packaging.
He said the U.S. power grid will become the first obstacle to stall everyone.
He said a "trillion-dollar cluster" will emerge. Every one of these calls later became a headline.
OpenAI named that cluster Stargate.
In the second half of 2025, he quietly increased his holdings in Bitcoin mining farms.
Not because he's bullish on Bitcoin's price, but because mining facilities already hold power contracts, ready-made data center sites, and high-power cooling.
What’s scarce in the AI era, Bitcoin mining farms already have.
Miners will transform into landlords of AI computing power.
His logic:
First layer: AI lacks electricity.
Second layer: those who control electricity are the most scarce.
Third layer: those who already hold electricity but are looked down upon by the market are gold mines.
But all of this is just the appetizer. He wrote in the book:
By 2027, AGI (Artificial General Intelligence) will arrive.
The logic goes like this: over the past four years, AI grew from a "pre-kindergarten child" (GPT-2) into a "smart high school student" (GPT-4).
In four more years, he says, AI will be able to replace human researchers and train itself.
Once AI can do AI research on its own, a decade of human algorithm iteration can be compressed into a single year.
The "intelligence explosion" begins at that moment.
By then, humans will no longer understand what AI is doing: the code it writes, the decisions it makes.
How will we know it isn't deceiving us?
Leopold offers three remedies in the book.
1. Weak-to-strong supervision. Use a less capable AI, one humans can still understand, to supervise the far more powerful AI.
The gamble is that the weaker model can still detect whether the stronger one is malicious.
Leopold himself is a co-author of the paper behind this idea (a toy sketch follows this list).
2. AI debate. Let several AIs confront one another, challenge each other's claims, and expose each other's flaws.
Humans act as quiet judges, using the inconsistencies to identify the liar (see the skeleton below).
3. Mechanistic interpretability. First remove dangerous parameters during training.
Then open up the AI's "brain" directly to see what it's thinking.
Build an "AI lie detector" that finds its internal "truth direction" (sketched after this list).
Leopold himself admits this is a moonshot-level challenge.
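The paper behind the first remedy is "Weak-to-Strong Generalization" (Burns et al., 2023), which Leopold co-authored. Here is a toy sketch of its core experiment, with sklearn models standing in for the GPT-series models the paper actually fine-tunes; all of the specifics below are illustrative, not the paper's setup.

```python
# Toy sketch of weak-to-strong generalization (Burns et al., 2023).
# A small "weak" model is trained on ground truth, a larger "strong" model
# learns only from the weak model's labels, and we measure how much of the
# weak-to-ceiling gap the strong model recovers.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=40, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. The weak supervisor sees ground truth but can only partly capture it.
weak = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 2. The strong student never sees ground truth, only the weak model's labels.
strong_from_weak = GradientBoostingClassifier(random_state=0).fit(X_train, weak.predict(X_train))

# 3. Ceiling: the same strong model trained directly on ground truth.
strong_ceiling = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

weak_acc = weak.score(X_test, y_test)
w2s_acc = strong_from_weak.score(X_test, y_test)
ceiling_acc = strong_ceiling.score(X_test, y_test)

# "Performance gap recovered" (PGR): how far the student climbs above its
# teacher toward its own ceiling. The paper's bet is that this stays above 0.
pgr = (w2s_acc - weak_acc) / (ceiling_acc - weak_acc)
print(f"weak={weak_acc:.3f} weak-to-strong={w2s_acc:.3f} ceiling={ceiling_acc:.3f} PGR={pgr:.2f}")
```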
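The debate idea goes back to "AI Safety via Debate" (Irving et al., 2018). A minimal skeleton might look like the following; the ask function and model names are hypothetical placeholders for any chat-completion API, not anything described in the book.

```python
# Minimal skeleton of the AI-debate protocol. `ask` is a hypothetical
# stand-in: wire it to whatever LLM API you actually use.
def ask(model: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `model` and return its reply."""
    raise NotImplementedError("connect this to a real chat-completion API")

def debate(question: str, rounds: int = 3) -> str:
    transcript = f"Question: {question}\n"
    for r in range(rounds):
        # Each debater sees the full transcript and must rebut the other,
        # which is what forces inconsistencies and hidden flaws to surface.
        for side in ("Debater A (argue YES)", "Debater B (argue NO)"):
            reply = ask("debater-model",
                        f"{transcript}\n{side}, round {r + 1}: make your strongest "
                        "case and attack your opponent's weakest claim.")
            transcript += f"{side}: {reply}\n"
    # The judge (in the proposal, a human) only has to evaluate the argument,
    # which is meant to be far easier than answering the question directly.
    return ask("judge-model", f"{transcript}\nJudge: which debater was more truthful, and why?")
```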
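And the "lie detector" in the third remedy, in its simplest published form, is a linear probe over a model's hidden activations. The sketch below plants a synthetic "truth direction" in random vectors, since we can't load a real transformer here; every variable is illustrative.

```python
# Sketch of a linear "truth probe": fit logistic regression on hidden-state
# activations of true vs. false statements; its weight vector is the
# recovered "truth direction". Synthetic activations stand in for real ones.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 512                                  # pretend hidden-state dimensionality
planted = rng.normal(size=d)             # assume truth is encoded linearly

# Fake activations: "true" statements are shifted along the planted direction.
labels = rng.integers(0, 2, size=2000)
acts = rng.normal(size=(2000, d)) + np.outer(labels - 0.5, planted)

probe = LogisticRegression(max_iter=1000).fit(acts, labels)
recovered = probe.coef_[0]

# If the probe worked, its weights point (almost) the same way as the
# planted direction.
cos = recovered @ planted / (np.linalg.norm(recovered) * np.linalg.norm(planted))
print(f"probe accuracy: {probe.score(acts, labels):.3f}, cosine: {cos:.3f}")
```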
Reading this, I finally understand why he ends with a photo of Oppenheimer.
He’s treating this as a new Manhattan Project.
He also admits that these three paths are, in essence, just "patches."
None of them truly solves the problem.
They're just bets that humanity can hold out until the day the alignment problem itself can be outsourced to AI.
What we’re doing now isn’t “solving AI safety,” but “hoping AI will solve AI safety for us.”
Sounds a bit like a troubled love affair?
You know it's flawed, but you keep betting it will change.
Back to investment.
The most valuable part of this book isn't the specific year, "AGI by 2027."
The margin of error there is large: maybe a year late, maybe half a year early.
What’s most valuable is that it clearly explains the entire decade’s bottleneck hierarchy in the AI industry:
Electricity > Advanced Packaging / HBM > Computing Power > Algorithms > Applications.
The higher the layer, the scarcer it is; the lower the layer, the more crowded.
Leopold personally verified this with real money in the open market.
As I close the book, I think:
Some books, read a year earlier, might have changed your life.
Fortunately, it's still not too late.
“See you in the desert, friend.”