Recently, I find myself thinking a lot about people in Silicon Valley. Especially every time I hear about Sam Altman, a complicated mix of emotions bubbles up.
In 2016, when *The New Yorker* profiled him, it was still a simple story: the 31-year-old president of Y Combinator, the darling of the industry. He owned five sports cars, flew planes, and always kept an emergency escape bag at hand. He owned land in Big Sur, with arrangements in place so he could flee by plane whenever necessary. At the time, people assumed it was merely a rich man's pastime.
But now, ten years later, when you look at the same person, an entirely different picture comes into view.
Sam Altman's business model is actually simple: take a business and turn it into a holy war over humanity's survival. He warns that AI will destroy humanity while accelerating that process himself. He says it isn't about money, yet by Bloomberg's calculation his personal wealth has reached about $2 billion.
What's interesting is the source of this wealth: an early investment in Stripe that returned several hundred million dollars, profits from Reddit's IPO, and a stake in a nuclear fusion company called Helion. He declares that "the future of AI depends on breakthroughs in energy," and soon afterward OpenAI begins negotiating a large-scale electricity purchase agreement with Helion. The chain of profit is obvious to everyone.
The reason Sam Altman's personal wealth keeps growing isn't simply that he owns shares in OpenAI. He has built a vast investment empire centered on himself, and his grand sermons about the future of humanity keep injecting value into every corner of that empire.
The tactic of selling fear and salvation together wasn't something he invented. It's an old Silicon Valley tradition. Elon Musk warns that with AI "we are summoning the demon," while running Tesla, the world's largest robotics company. After the metaverse failed, Zuckerberg quickly switched to a new grand narrative: AGI. Peter Thiel builds underground bunkers in New Zealand to prepare for the end of the world, while owning Palantir, one of the world's largest data-surveillance companies.
Each of them warns that “the end is near,” while simultaneously playing the dual role of “accelerating the end.” This isn’t a split personality. It’s a business model that has been tested and verified as the most efficient one in capital markets.
Why does this ploy work every time? Because it precisely targets the weaknesses in human cognition.
First, they create a fear that can't be ignored. The risks of AI are real, but they present them in the most dramatic way possible. Next, they monopolize the interpretation of that fear: since AI is a near-total black box for most people, the public hands the right to explain it to "the person who understands it best." Finally, they use "meaning" to turn followers into the most loyal evangelists. In the face of a mission tied to humanity's survival, doubting the leader's motives makes you look petty, even insignificant.
In November 2023, when the board fired Altman for being "not consistently candid," what happened? President Greg Brockman resigned, and more than 700 employees threatened to leave for Microsoft. Microsoft CEO Satya Nadella openly took Altman's side. Altman returned like a king, and nearly all the directors who had opposed him were gone.
Why did a CEO officially labeled dishonest return without a scratch? Because he isn't an ordinary CEO; he's a "charismatic leader." It's a concept Max Weber proposed a century ago: authority that comes not from rank or law but from the leader's extraordinary personal charisma. Followers trust him not because he did the right thing, but because he is who he is. This faith is irrational, and when the leader errs, the followers' first reaction isn't to doubt the leader but to attack whoever challenges him.
After Altman's reinstatement, OpenAI's safety apparatus was quickly dismantled. Chief Scientist Ilya Sutskever departed. Jan Leike, who co-led the safety team, resigned in May 2024 and wrote on X that "safety culture and processes have taken a backseat to shiny products."
In front of a charismatic leader, facts, process, and safety don't matter. The only thing that matters is faith.
The same pattern shows up with regulation. In May 2023, Altman told the U.S. Congress, "Please regulate us." At the time, OpenAI was far ahead technically, so strict regulation would mainly have locked out potential rivals. But as competitors like Google and Anthropic began catching up, his position quietly shifted. Now he emphasizes that overly strict regulation suffocates innovation.
When he believes he holds absolute dominance, he calls for regulation; when that dominance fades, he cries freedom. He is also trying to extend his influence to the very top of the supply chain, proposing a $7 trillion chip plan and courting sovereign wealth funds such as those of the United Arab Emirates. This goes far beyond a CEO's job description; it is the behavior of an ambitious man trying to reshape the global order.
OpenAI's transformation is also striking. Its original mission, at its founding in 2015, was "to ensure that AGI safely benefits all of humanity." But in early 2024, outside observers noticed that the word "safely" had quietly disappeared from the mission statement. Revenue exploded from tens of millions of dollars in 2022 to more than $10 billion a year in 2024, and its valuation surged from $29 billion into the $100 billion range.
When someone gazes up at the night sky and starts talking about humanity's fate, it's best to first check where his wallet is pointed.
In February 2026, shortly after declaring "AI will not be used for war" to be his red line, Altman signed a contract with the Pentagon. This isn't hypocrisy; it's a requirement built into his business model. The moral stance is part of the product, and the commercial contracts are the source of profit. He has to play both the benevolent savior and the ruthless prophet of doom at once, because if he can't fill both roles simultaneously, the story can't continue, and the "destiny" he sells loses its force.
Silicon Valley has become not just a place where technology is created, but a factory that produces modern myths. And the fact that Sam Altman’s net worth has reached $2 billion symbolizes just how profitable these myths are.
The real danger isn’t AI itself, but people who believe they have the right to define humanity’s destiny.