I just noticed something that’s quite interesting. Over the past five years, the AI industry has created a powerful narrative about the existential threats posed by AGI—not because there is definitive evidence, but because it is practically useful. This story has helped them attract funding from investors, draw the government’s attention, and create a sense of urgency throughout the industry. But the problem is that once that story gets out into the world, it’s no longer under the control of its creators.
It comes back around, and sometimes it shoots at you. Take what happened in the first week of April. At 03:40 on April 10 in San Francisco, a 20-year-old man named Daniel Moreno-Gama threw a Molotov cocktail at the door of Sam Altman's apartment. The fire scorched the doorway, and he fled. About an hour later, the same man appeared near OpenAI's office and threatened to start another fire before he was arrested.
Two days later, at 01:40 on April 12, a Honda was parked beside another of Altman's homes, on a hillside. A passenger reached out of the window and fired at the house. Authorities arrested two suspects, Amanda Tom and Muhammad Tariq Hussen, and charged both with reckless shooting. Two attacks within forty-eight hours.
Daniel Moreno-Gama, the man behind the first attack, was already a pessimist about AI. He had earlier published a social media post, illustrated with images from Dune, arguing that failures in AI alignment are an existential risk and criticizing tech leaders for gambling humanity's fate in pursuit of "post-humanism."
His point was that what OpenAI and other AI leaders say is not genuine belief but strategy. A narrative about existential-scale threats lets them say three things at once: "We are the leaders in the most dangerous technology. We are the most responsible. So the funding should flow to us."
But once those words are out, they don’t stay in investor conference rooms or polished, glass-walled offices. Some people take the narrative seriously, and in some cases, it becomes a straightforward instruction to act.
Look at what’s happened since the end of last year. In December 2024, UnitedHealthcare CEO Brian Thompson was shot and killed. The suspect, Luigi Mangione, graduated from a top university and left a message criticizing the health insurance industry. The case sparked bizarre reactions on social media—many users expressed sympathy for the perpetrator, and some even praised him as a symbol of resistance.
After that, the door was kicked open. According to Fortune, executive security went from being a perk to "a survival necessity." Physical attacks on executives of large companies have risen 225% since 2023. Among S&P 500 companies, 33.8% reported executive security spending in 2025, up from 23.3% in 2020.
The average company now spends about 130,000 US dollars a year on executive security, up 20% year over year and double the figure of five years ago.
But the AI industry is different—not because security risks have increased, but because the AI creators themselves believe this technology could destroy civilization. The words carry weight.
In 2024, combined security spending for the CEOs of the ten leading technology companies exceeded 45 million US dollars. Mark Zuckerberg alone spent more than 27 million US dollars, more than the CEOs of Apple, Google, and four other companies combined. Sundar Pichai of Google spent 8.27 million US dollars, up 22% year over year. Jensen Huang of NVIDIA spent 3.5 million US dollars in 2025, up 59%. These latest figures point to a steeply rising trend.
In a 2025 Pew Research survey of 28,333 respondents worldwide, only 16% said they felt excited about AI development, while 34% said they felt concerned. More interesting still, respondents with higher education and higher income were the more worried about AI getting out of control. The people who understand it best fear it most.
I just saw a report that the home of Ron Gibson, an Indianapolis city council member, was hit by 13 gunshots overnight. His 8-year-old son woke up to the gunfire. A handwritten note at the door read, "Do not build a data center." The FBI is investigating.
As researchers at George Washington University point out, data centers are becoming targets for anti-technology groups. This fear is no secret inside the industry, yet it is rarely discussed openly.
Go back to 2016, the year OpenAI had just been founded. Sam Altman built an underground bunker in Wyoming: 1,200 square meters, three stories, 500 kilograms of gold, 5,000 potassium iodide tablets, 5 tons of freeze-dried food, and 100,000 rounds of ammunition. On stage he called AI humanity's greatest opportunity; off stage he stockpiled enough ammunition for a small militia. It was a two-way bet: publicly betting that AI would succeed, quietly preparing in case it got out of control.
After the first attack, Altman published a blog post. He included a photo of himself with children and said he hoped the image would stop the next person from throwing a Molotov cocktail at his house. He acknowledged the moral seriousness of the dissenters while calling for public discussion that was "less direct and less metaphorical."
But the most interesting part is his admission that he had "underestimated the power of media narratives" and paid too little attention to how those words land on the ground. He knew the narrative had more power than he expected. He knew exactly how it would come back around.
One week before the first attack, The New Yorker published an in-depth profile of Altman by Ronan Farrow and Andrew Marantz, based on interviews with more than 100 people with firsthand knowledge. Its core argument came down to a single word: "untrustworthy." The article quoted a former OpenAI board member calling Altman "a person with an antisocial personality" who is "not bound by the truth," and many colleagues said he had shifted his stance on AI safety again and again.
In his post, Altman admitted he tends to avoid conflict. He had built the public narrative that "AI is an existential threat" as a tool for raising money and bargaining over regulation, but in the end the tool slipped out of his hands.
On February 27 this year, OpenAI signed a contract with the U.S. Department of Defense allowing the U.S. military to deploy ChatGPT on classified defense networks. On the same day, Altman expressed support for Anthropic's position limiting military use of AI.
The QuitGPT movement reportedly gathered more than 1.5 million participants. On March 21, about 200 protesters marched through San Francisco past the offices of Anthropic, OpenAI, and xAI, calling on all three CEOs to commit to pausing the development of advanced AI. London saw the largest anti-AI march it had ever held.
This is the boomerang effect of narratives. Altman's Wyoming bunker and his hired security were designed for two different kinds of risk: one from outsiders, one from the thing he himself was building. Privately he took both seriously; publicly he acknowledged only one.
The narrative he created was released into the world, and it came back and hit his own door.