Lately I find myself thinking about the duality of a certain tech entrepreneur. He sits at the pinnacle of a business that preaches the risk of human extinction while personally accelerating that very process. A 2016 New Yorker feature described him as the 31-year-old head of Y Combinator, stuffing guns, gold, and a gas mask into an escape bag. Even now, the question of what he was truly afraid of back then lingers.
A decade later, he has become the perfected form of a business model built on AI apocalypticism. He repeatedly invokes risks comparable to nuclear war, existential crises, and the survival of humankind. These claims inevitably become top headlines and supply OpenAI with a steady stream of free advertising. Fear is the most efficient lever for capturing attention. At the same time, he is pushing Worldcoin, which he bills as salvation in the age of AI, using iris scans to build a database of people around the world. Even when multiple governments issue stop orders, it is not a problem for him. What matters is only that he appears to be the person offering the only solution.
His stance on regulation is just as telling. In congressional testimony in 2023, he personally asked, "Please regulate us." Since OpenAI held an overwhelming technical lead at the time, strict regulation would have been the best barrier against competitors. Then in 2024, as rivals began to catch up, his rhetoric changed: now excessive regulation would hinder innovation. Regulation is both his shield and his sword.
In November 2023, he was removed from the board; the stated reason was dishonesty. But five days later, with more than 700 employees threatening to defect to Microsoft, he returned like a king. The board had not known about his hidden investment portfolio: early stakes in Stripe worth several hundred million dollars, substantial gains from Reddit's IPO, and an investment in Helion. Immediately afterward, OpenAI entered power-contract negotiations with Helion. He insists he holds no direct OpenAI shares, yet he has built a personal investment empire worth two billion dollars with himself at its center.
Silicon Valley repeats business models like this again and again. Musk launched xAI while warning that "AI is the devil." Zuckerberg, after his roughly $90 billion metaverse bet failed, pivoted to a business touting a grand vision of AGI. Peter Thiel, while building underground bunkers in preparation for the end times, is constructing one of the world's largest surveillance tools with Palantir. Each plays a dual role, warning of the apocalypse while simultaneously urging it on.
The reason this kind of approach works is simple. First, it creates fear and controls its rhythm perfectly. Second, it makes AI’s inscrutability the source of authority. In the face of what people can’t understand, they instinctively hand over the right to explain to experts. Third, it replaces profit with meaning. If you can make people believe that human destiny is at stake, followers will voluntarily stop criticizing.
In February 2026, he signed a contract with the Pentagon right after declaring that he would not use AI in war. This is not hypocrisy; it is a demand embedded in his business model. The story cannot continue unless he plays both roles at once: the benevolent savior and the ruthless prophet of the end. The real danger is not AI itself but the people who believe they have the right to define humanity's fate. His true calling is nothing other than securing the position of the most certain winner in an uncertain future.