I just noticed some interesting details about Sam Altman and what he’s doing with OpenAI. The contradiction between what he says publicly and what he does behind the scenes is anything but ordinary.
Let’s start with a recent incident in San Francisco. On April 10, around 03:40, a 20-year-old man named Daniel Moreno-Gama threw a Molotov cocktail at the door of Altman’s apartment. The fire scorched the area around the door, and he fled. About an hour later, the same man appeared near OpenAI’s office and threatened to set it on fire as well before being arrested.
Two days later, on April 12 at 01:40, a Honda was parked beside another of Altman’s houses, on a hillside. The passenger extended an arm out of the window and fired at the house. CCTV captured the license plate, and two suspects were later arrested; both were charged with reckless discharge of a firearm.
What’s interesting is that the suspect in the first attack, Moreno-Gama, holds deep concerns about advanced AI. He had posted on social media, citing Dune, arguing that AI misalignment failures pose an existential risk to humanity, and criticizing technology leaders who gamble with humanity’s fate in pursuit of “post-humanism.”
After the first attack, Altman wrote a blog post acknowledging the moral position of the dissenters and calling for a public debate that is “less incendiary, both literally and figuratively.” He also responded to an in-depth New Yorker article published shortly before the attack, writing, “I underestimated the power of media narratives, and words are too weak.” Two days later, another of his residences was shot at.
This ties into a broader trend. In December 2024, the CEO of UnitedHealthcare was shot and killed outside a hotel. The suspect, a graduate of a top university, left a message criticizing the health insurance industry. The case drew an unusual reaction on social media, with many users expressing sympathy for the perpetrator.
After that killing, executive security shifted from a “perk” to a “survival necessity,” as Fortune put it. Physical attacks on executives at large companies have increased by 225% since 2023. Among S&P 500 companies, 33.8% reported executive security expenses in their 2025 financial reports, up from 23.3% in 2020.
The average corporate security expense is 130,000 dollars, up 20% year over year and double what it was five years ago. The AI industry is the latest, and most visible, example of this trend.
Security costs for the CEOs of the top 10 technology companies totaled more than 45 million dollars in 2024. Mark Zuckerberg alone accounted for more than 27 million dollars, more than the CEOs of Apple, Google, and four other companies combined. NVIDIA’s Jensen Huang spent 3.5 million dollars in 2025, up 59% from the previous year; Google’s Sundar Pichai spent 8.27 million dollars, up 22%.
But the AI industry has something other industries don’t: even its creators believe the technology could destroy civilization. In a 2025 Pew Research Center survey of 28,333 respondents worldwide, only 16% said they were excited about AI, while 34% said they were concerned. Even more counterintuitively, people with higher education and higher incomes were more worried about AI getting out of control.
Not long ago, the home of Indianapolis city council member Ron Jibson was hit by 13 gunshots; his 8-year-old son woke up to the gunfire. A handwritten note at the door read, “No data centers.” The FBI stepped in to investigate. Researchers at George Washington University point out that data centers are becoming targets for anti-technology and anti-government radical groups.
This fear is no secret inside the industry, but it isn’t spoken about openly. Altman built a bunker in Wyoming in 2016, shortly after OpenAI’s founding was announced. Even as he said on stage that AI is humanity’s greatest opportunity, he was stockpiling weapons and preparing for a breakdown of civil order.
OpenAI’s main storyline over the past five years has been to stress the seriousness of the “existential-level” threat of AGI, so that governments take regulation seriously, investors understand the stakes, and the industry recognizes this as a race it cannot afford to fall behind in. But a line like “this is the most dangerous technology in human history,” once released, does not stay confined to the technology world. It spreads, and for some people it becomes a directive to act.
Moreno-Gama wrote on Instagram: “Exponential progress plus misalignment equals existential risk.” That argument structure traces straight back to key documents in AI safety research, many of them supported or endorsed by OpenAI.
On February 27 this year, OpenAI signed a contract with the U.S. Department of Defense allowing the military to deploy ChatGPT on classified defense networks. On the same day, Altman voiced support for Anthropic’s position that military uses of AI should be limited.
ChatGPT uninstalls rose 295% in a single day, and one-star reviews surged 775% within 24 hours. The QuitGPT movement has drawn more than 1.5 million participants. On March 21, about 200 protesters marched past the offices of Anthropic, OpenAI, and xAI in San Francisco, calling on the three CEOs to halt the development of advanced AI. At the same time, London saw its largest anti-AI march to date.
Altman’s bunker and security personnel are designed to deal with two different types of risk: one from external individuals, and the other from what he himself is building. He takes both risks seriously in private, but publicly acknowledges only one.
The New Yorker’s in-depth article, published the week of the first attack and based on interviews with more than 100 people, boiled its core argument down to a single word: “unreliable.” It quoted a former OpenAI board member calling Altman “a person with antisocial personality” and “not bound by truth.” Several colleagues described how he repeatedly shifted his stance on AI safety and reshuffled power whenever it suited him.
In his post, Altman admitted that he tends to avoid conflict. He built the public narrative that “AI is an existential threat” as a fundraising tool and as leverage in regulatory negotiations, but in the end the tool slipped out of his hands, turned around, and came knocking at his own door.