Molotov cocktails and “The Lord of the Rings”: the attack on Sam Altman, the first real explosion of the AI era
By Weisha Zhu
This is not an ordinary piece of tech news. It is the first real-world explosion of the AI era.
On April 11, 2026, Beijing time, OpenAI CEO Sam Altman stated publicly in a blog post that, the day before, local time, a 20-year-old man had thrown a Molotov cocktail at his San Francisco residence. The device bounced off the house and caused no casualties, and police quickly arrested the suspect.
So far there is no direct evidence linking the attack to AI controversies, but Altman himself explicitly connected it to the societal anxiety caused by AI.
Why? Because he understands that this fire is not burning the exterior wall of his house; it is burning through the long-standing collective complacency of the AI elite.
In recent years, debates about AI have never stopped: accelerationism versus decelerationism, Musk versus Altman, internal conflicts within OpenAI, IPO plans, effective altruism… But these have stayed confined to elite circles: papers, tweets, court documents. Ordinary people have been distant spectators.
The Molotov cocktail shattered all of this.
It is not a theory, not a petition, not an open letter. It is a 20-year-old expressing, in an extreme, mistaken, yet painfully real way, a despair pushed to its limit:
“I cannot influence the direction of AI through any normal channels, so I chose the most primitive violence.”
This incident became a watershed not because it “might” be related to AI, but because the societal anxiety triggered by AI has, for the first time, turned from an abstract concept into real physical flames.
When we previously said “AI might cause social unrest,” that was a prediction; now a Molotov cocktail has landed on Silicon Valley’s lawn.
From now on, any discussion about AI’s social impact that still pretends it’s just a topic for conference rooms or Twitter is self-deception.
Throwing a Molotov cocktail is a serious crime and completely unjustifiable. Altman publicly shared family photos, admitted his flaws, and called for reducing confrontation; that restraint is commendable.
But if we only condemn the violence and avoid the core questions of “why him?” and “why now?”, we are avoiding reality.
Altman leads the world’s best-known AI company, is a frontrunner in the AGI race, and is a symbol of “accelerationism.” In the eyes of countless ordinary people, he is the one deciding the future for all of humanity, even if he himself might not see it that way.
When a person is regarded as “the one wielding the magic ring,” while his company shifts from a non-profit to a for-profit structure, prepares for an IPO, and faces lawsuits from former partners, the extreme hostility he attracts, though absolutely unacceptable, does not come out of nowhere.
The Molotov cocktail is an ugly form of expression. But the root cause is real: too many people feel they have completely lost their voice in the AI era, and the pace of technological change has long exceeded society’s psychological capacity to cope.
First layer: the collapse of jobs and dignity. Cars replaced horses; AI replaces human cognitive labor itself.
Just a few days earlier, on April 7, 2026, Anthropic had released Claude Mythos Preview, internally called “the strongest Claude ever.”
Its capabilities in code understanding, complex reasoning, and vulnerability discovery have made a shocking leap: it can autonomously find thousands of high-risk zero-day vulnerabilities, and it can even chain multiple vulnerabilities together to execute privilege-escalation attacks.
This model is not yet open to the public. Anthropic explicitly calls it “too powerful, too dangerous,” fearing malicious use in cyberattacks, and has limited access to a few partners for defensive security research under the Project Glasswing plan.
Even more alarming: if it were fully opened up, entire industries of security auditing, penetration testing, and code review could collapse almost overnight, and millions of jobs built on that expertise could be replaced by AI in a very short time.
This uncertainty, the sense that jobs safe today might be collectively gone tomorrow, is creating real panic in countless minds.
Second layer: the concentration of power. Key decisions about AI development are made by a few labs and tech giants, and ordinary people have no effective way to “disagree.”
Third layer: intergenerational injustice. Today’s AI path is set by elites and capital, but all of its future consequences will be borne by the next generation, and the ones after that. They never got a vote, yet they must pay the price of this transformation.
These three layers of fear stack on top of one another, compounded by fierce disputes within the elite (acceleration versus pause, Musk versus Altman). To ordinary people, it all looks like “gods fighting.” When dissatisfaction cannot be expressed through normal channels, distorted ways of “speaking out” emerge.
Altman reflected on his mistakes and called for decentralization in his blog post. These personal statements have their value.
But a Molotov cocktail has already proven that personal reflection and touching photos cannot contain this spreading crisis.
We need at least four serious institutional responses:
Transparency and substantive participation: open algorithms, independent third-party audits, and mechanisms that give the public meaningful leverage. Not superficial consultation, but real power sharing.
Societal buffer mechanisms: large-scale retraining programs, transitional income support, and deep reform of education systems, turning most people from “the ones AI replaces” into “effective users of AI.”
A balanced governance framework: avoiding overregulation that stifles innovation while still enforcing external checks. True “democratization of AI” cannot remain a mere slogan.
Reducing hostility: tech leaders, critics, and the media should all stop inciting zero-sum thinking. Whoever keeps stoking the fire is fueling the next Molotov cocktail.
These suggestions are not new. But the Molotov cocktail has shifted their urgency from “should do” to “must do immediately.”
In the past, the biggest problem with AI discussions was that elites talked among themselves while societal anxiety fermented in silence.
Now the silence has been broken, by the ugliest and most dangerous of means.
If we merely condemn the violence and then go back to writing papers, tweeting, and litigating, the next Molotov cocktail might not bounce off so “luckily.”
Altman’s family photos are touching. But true safety has never come from a photo; it comes from an institutional track that lets everyone, including that 20-year-old attacker, see hope in the future.
Fear is spreading rapidly.
AI has arrived too fast, grown too powerful, and been distributed too unfairly; fast enough that we must lay tracks and build buffers through institutions. Otherwise, the next “explosion” will no longer be metaphorical.
This fire did not kill anyone, but it burned through a dangerous illusion and made one thing clear:
The future of AI can no longer be decided by a few people alone.