I just read an investigation published yesterday in The New Yorker that left many people in shock. Ronan Farrow and Andrew Marantz interviewed more than 100 people, obtained never-before-disclosed internal memos, and even got hold of over 200 pages of personal notes from Dario Amodei (the Anthropic founder who left OpenAI). The result is basically a bombshell.
It all starts with a 70-page document compiled by Ilya Sutskever, OpenAI's chief scientist, in the fall of 2023. He gathered Slack messages, communications with HR, and meeting minutes. The goal? To answer a simple question: can Sam Altman, who possibly controls the most dangerous technology in history, be trusted? Sutskever's answer, in the first line: Sam demonstrates a consistent pattern of... lies.
The examples are very specific. In December 2022, Altman assured the board that GPT-4 features had already undergone safety review. When the board requested the approval documents, it discovered that two of the most controversial (custom tuning and personal assistant deployment) had never been approved. Then there was the case in India, where Microsoft launched ChatGPT before safety reviews were completed. Sutskever also noted that Altman told CTO Mira Murati that the safety process wasn't that important because the legal counsel had already approved it. When Murati checked with the legal counsel, he responded: "I have no idea where Sam got that from."
But the coldest part concerns the "superalignment" team. In mid-2023, Altman contacted a Berkeley student researching AI alignment, seemed very concerned about the issue, and floated a $1 billion research prize. The student dropped out of college and joined OpenAI. Then Altman changed his mind: instead of prizes, he created an internal team and publicly announced he would dedicate 20% of the company's computing capacity (potentially worth over $1 billion) to it. The language was extremely serious, invoking "human extinction" if alignment wasn't solved.
In reality? Four people who worked there say only 1-2% of total capacity was ever allocated, and on outdated hardware. The team was dismantled without completing anything. When journalists asked to interview those responsible for "existential safety" research, the press office's response was: "That's not really a thing that exists."
And there's more. Sarah Friar, OpenAI's CFO, strongly disagreed with Altman about the IPO. She believes the company isn't ready yet: too many bureaucratic hurdles remain, and Altman's promise to spend $600 billion on computing capacity carries too much risk. But Altman wants to accelerate to Q4 of this year. The absurd part? Friar no longer reports directly to Altman. Since August 2025, she has reported to Fidji Simo, CEO of the app business. And Simo has been on medical leave since last week.
A former board member described Altman like this: an extreme desire to please in face-to-face interactions, combined with an almost sociopathic indifference to the consequences of deceiving people. It's a rare combination, but for a salesperson? It's the perfect gift. Even Microsoft executives can't take it anymore, saying Altman "distorted facts, broke promises, and constantly revoked agreements." One even compared him to Bernie Madoff or SBF.
Now, why does this matter so much? Because OpenAI is not a normal tech company. It is developing technology capable of restructuring the global economy, creating biochemical weapons at scale, or launching cyberattacks. All the safety mechanisms are now basically formalities. The nonprofit mission has turned into a race for the IPO. The former chief scientist considers Altman untrustworthy. The head of safety left and founded Anthropic over fundamental disagreements about how AI should be developed.
Gary Marcus, an AI professor at NYU, later wrote: if an upcoming OpenAI model turns out to be capable of creating biochemical weapons or launching catastrophic cyberattacks, do you really feel safe letting Altman decide whether to release it?
OpenAI's response? It basically ignored the specific allegations, didn't deny the memos, and only questioned the motives of the sources. Altman didn't respond directly.
Ten years of OpenAI in one paragraph: a group of idealists creates a nonprofit concerned about AI risks; it achieves notable advances and attracts massive capital; the capital demands returns; the mission gives way; the safety team is dismantled; dissenters are pushed out; the nonprofit structure turns into a profit-driven entity; and the board that could shut down the company is now full of the CEO's allies. Over a hundred witnesses used the same label for the protagonist: "not constrained by the truth." And now it's heading for an IPO valued at over $850 billion.