I just read something that left me pretty shaken. The New Yorker has just published a massive investigation in which journalists like Ronan Farrow obtained internal OpenAI documents revealing a worrying pattern: apparently Sam Altman has been systematically lying to the board of directors and executives about critical safety issues.
The starting point is brutal. Ilya Sutskever, OpenAI’s chief scientist, compiled a 70-page document a couple of years ago based on Slack messages, HR communications, and meeting minutes. His conclusion, on the very first line: Sam shows a consistent pattern of lying. One specific example from the investigation: in December 2022, Altman assured the board that several GPT-4 features had passed safety review. When board members asked to see the approval documents, they found that two of the most controversial features had never been reviewed by the safety panel.
But what really caught my attention was what they found in the personal notes of Dario Amodei, co-founder of Anthropic and a former head of safety at OpenAI: more than 200 pages documenting how the company backed away from its safety commitments, step by step, under commercial pressure. A key detail: when Microsoft invested in 2019, it negotiated a “merge and assist” clause that supposedly guaranteed that if a competitor found a safer route to AGI, OpenAI would help. It sounds good on paper. The problem is that Microsoft also got a veto right over that same clause, turning it into an empty promise from the day it was signed.
There’s something that sounds almost absurd but is completely real: OpenAI’s “superalignment” team. Altman publicly announced that OpenAI would dedicate 20% of its existing compute capacity, potentially worth more than a billion dollars, to AI alignment research. It was a serious announcement, invoking human extinction risks and all that. But when journalists spoke to four people who worked on that team, the reality was different: the compute actually allocated was only 1% to 2% of the total, and it ran on old hardware. The team was disbanded without completing its mission.
While this was happening, OpenAI’s CFO, Sarah Friar, had serious disagreements with Altman over a possible IPO. Friar believed the company wasn’t ready to go public this year, given the amount of work still pending and the financial risk of Altman’s commitment to spend $600 billion on compute capacity over five years. And here’s the strange part: Friar no longer reports directly to Altman; she now reports to Fidji Simo, who has just taken a health leave. A company preparing for an IPO with those kinds of internal dynamics is, well, complicated.
A former board member described Altman as combining two traits: a genuine desire to please in every face-to-face interaction, and an almost sociopathic indifference to the consequences of deceiving others. That combination, according to the report, is rare in people but perfect for a salesman.
What worries me is that this isn’t just corporate gossip. OpenAI is developing what it itself describes as possibly the most powerful technology in human history: technology that could reshape the economy, create biochemical weapons at scale, or carry out cyberattacks. And the safeguards that were supposed to protect against these risks have been dismantled. The former chief scientist and the former head of safety consider the CEO untrustworthy. Microsoft executives compare him to SBF.
OpenAI’s response to The New Yorker was dismissive: it said the article rehashes previously reported events on the basis of anonymous claims. Altman didn’t respond to the specific allegations; he only questioned the motives of the sources.
Ten years of OpenAI, summarized: a group of idealists creates a nonprofit organization to protect humanity from AI risks. They achieve extraordinary breakthroughs. Massive capital flows in. The mission starts to slip. The safety team is disbanded. Critics disappear. The nonprofit structure transforms into a for-profit entity. The board that once had the power to shut down the company is now filled with CEO allies. The company that promised to dedicate 20% of its computing power to protecting human safety now has spokespeople saying that existential safety research “is not really a thing.”
And the protagonist of this story is about to take OpenAI public at a valuation above $850 billion. A hundred witnesses converged on the same description of him: he does not feel bound by the truth.