Lately I find myself thinking about the duality of a certain tech entrepreneur. He sits at the pinnacle of a business that preaches the risk of human extinction while personally accelerating that very process. A 2016 New Yorker feature described him as the 31-year-old head of Y Combinator, stuffing guns, gold, and a gas mask into an escape bag. The question of what he was actually afraid of back then still lingers.

A decade later, he has become the finished form of a business model built on AI apocalypticism. He repeatedly invokes risks on the scale of nuclear war, existential crisis, and the survival of humankind. Such warnings inevitably become top headlines and keep supplying OpenAI with free advertising; fear is the most efficient lever for capturing attention. At the same time, he is pushing Worldcoin, which he pitches as salvation in the age of AI, using iris scans to build a database of people around the world. Even when multiple governments issue stop orders, it poses no problem for him. All that matters is that he appears to be the one offering the only solution.

His stance on regulation is just as interesting. Testifying before Congress in 2023, he personally asked, "Please regulate us." Since OpenAI held an overwhelming technical lead at the time, strict regulation would have been the best barrier against competitors. Then in 2024, as rival technologies began to catch up, his rhetoric changed: now excessive regulation would stifle innovation. Regulation is both his shield and his sword.

In November 2023, the board removed him, citing dishonesty. But five days later, with more than 700 employees threatening to defect to Microsoft, he returned like a king. The board had not known about his hidden investment portfolio: early stakes in Stripe worth hundreds of millions of dollars, substantial gains from Reddit's IPO, and an investment in Helion. Soon afterward, OpenAI entered power-contract negotiations with Helion. He insists he holds no direct OpenAI shares, yet he has built an investment empire centered on himself worth 2 billion dollars.

Business models like this repeat again and again in Silicon Valley. Musk launched xAI while warning that "AI is the devil." After Zuckerberg's 90-billion-dollar bet on the metaverse failed, he pivoted to a business touting a grand vision of AGI. Peter Thiel, while building underground bunkers in preparation for the end times, is constructing one of the world's largest surveillance tools with Palantir. Each plays a dual role: warning of the apocalypse while simultaneously urging it on.

The reason this approach works is simple. First, it creates fear and controls its rhythm perfectly. Second, it turns AI's inscrutability into a source of authority: faced with what they cannot understand, people instinctively hand the right to explain over to experts. Third, it replaces profit with meaning. If you can make people believe the fate of humanity is at stake, followers will voluntarily stop criticizing.

In February 2026, he signed a contract with the Pentagon shortly after declaring that he would not use AI for war. This is not hypocrisy; it is a demand embedded in his business model. The story cannot continue unless he plays both roles at once: the benevolent savior and the ruthless prophet of the end. The real danger is not AI itself but the people who believe they have the right to define humanity's fate. His true calling is nothing other than securing the position of the surest winner in an uncertain future.