Lately I’ve been turning over a question: how can someone preach that the end of the world is coming, prepare for it personally, and still manage to profit handsomely from it?
After reading a lot of the coverage of Sam Altman, I realized this guy may be the most ingenious product Silicon Valley’s machine has ever built. Ten years ago he bought five sports cars, flew around in rented private jets, and stockpiled guns, gold, potassium iodide, and a plot of land in California: classic doomsday-prepper behavior. And now? He has become the best salesman of anxiety alive, warning that AI could destroy humanity while personally pushing that process forward.

His business logic is actually quite clear: package a business as a holy war about whether humanity survives. At OpenAI, he has taken this playbook to its absolute limit. He portrays AI’s “catastrophic risk” better than anyone else. When he testified before the Senate, he said people should be afraid of AI—every sentence could make the headlines, and every line served as free advertising for the company. Fear is the most efficient attention lever.

What’s interesting is that he also came prepared with a solution: Worldcoin. Once the fear is planted, the pitch for the cure follows naturally: a silver orb scans people’s irises, with the promise of paying them out in the AI era. The story is moving, but several countries have suspended the project over privacy concerns. To him that hardly matters; what matters is that he has successfully cast himself as “the only one holding an answer.”

Even more interesting is his attitude toward regulation. While OpenAI’s technology was ahead, he proactively called for regulation and proposed an AI licensing regime, which would conveniently lock out competitors. Once Google and Anthropic caught up, he abruptly began warning that strict regulation would “stifle innovation.” This about-face points to the same goal as the $7 trillion chip plan he later floated: power and influence.

The boardroom turmoil of November 2023 makes the pattern even clearer. He was ousted for being “not consistently candid,” yet five days later he returned with more power than before. Over 700 employees threatened to quit en masse, and Microsoft’s CEO publicly took his side; the man had effectively become the symbol of a faith. The accusations against him, concealing his control over investments and misrepresenting safety procedures, would be enough to get an ordinary CEO fired a hundred times over. But he was fine, because he is no longer an ordinary CEO; he is a “charismatic leader.”

After his reinstatement, OpenAI’s safety team was quickly dissolved. Chief scientist Ilya Sutskever left, and safety lead Jan Leike resigned too, leaving behind one line: “safety culture and processes have taken a backseat to shiny products.” Before a “charismatic leader,” facts don’t matter, process doesn’t matter, safety doesn’t matter; only faith does.

In 2024, Bloomberg added up the numbers: Sam Altman’s net worth is roughly $2 billion. He has always insisted he holds no equity in OpenAI and draws only a symbolic salary. But the fortune comes from more than a decade of investing: hundreds of millions of dollars in returns from Stripe, the windfall from Reddit’s IPO, and a large stake in the nuclear-fusion startup Helion. Notice the pattern: he declares that AI’s future hinges on an energy breakthrough, he bets on fusion, and then OpenAI negotiates a large power-purchase agreement with Helion. He says he recused himself from those negotiations, but anyone can see what the chain of interests looks like.

He doesn’t directly hold OpenAI equity, but he has built a massive personal investment empire around OpenAI. Every grand sermon about the future of humanity injects value into this empire. That survival pack stuffed with guns, gold, and antibiotics—the land in Big Sur that can be flown to at any moment—now seems to have taken on a new meaning.

He never tries to hide any of this. His doomsday obsession is genuine, but he is also the person working hardest to bring that doomsday closer. The two don’t contradict each other, because in his logic the end of the world doesn’t need to be stopped; it only needs to be foreseen early enough to get ahead of it. He is obsessed with playing the one person who sees the future clearly and prepares for it. Whether he is packing a survival kit or building a financial empire, it is the same move: in an uncertain future that he himself is driving forward, positioning himself as the most certain winner.

In February 2026, right after endorsing the red line that “AI must not be used for war,” he turned around and signed a contract with the Pentagon. That isn’t hypocrisy; it’s a built-in requirement of the business model. The moral stance is part of the product; the contracts are the revenue. He has to play the compassionate savior and the cold-eyed doomsday prophet at the same time. Only then does the story keep running, and only then does his “destiny” look unmistakable.

The truly dangerous thing was never AI itself—it was always those people who believe they have the right to define humanity’s fate.