Just caught up on what might be one of the most damning investigative pieces about OpenAI leadership in a while, and honestly, it's hard to look away from the implications here.
So back in 2023, Ilya Sutskever, then OpenAI's chief scientist, compiled an extensive memo documenting concerns about Sam Altman's trustworthiness. We're talking 70 pages drawing on Slack logs, HR records, and internal meetings. The opening line was blunt: "Sam exhibits a consistent pattern of lying." Fast forward to now, and The New Yorker's investigation (by Ronan Farrow and Andrew Marantz) has surfaced the memo along with 200+ pages of private notes from Dario Amodei, who was OpenAI's head of safety before founding Anthropic.
Here's what gets me: OpenAI was literally structured as a nonprofit specifically so that safety would come before profit. The whole premise was that someone needed to be able to shut the company down if things got dangerous. The entire architecture bet everything on one assumption—that the person running it had to be radically honest.
But according to the memo and interviews with over 100 people, that's not what happened. There are specific examples: Altman told the board GPT-4 features had passed security reviews when they hadn't. Board members found out the hard way. In one incident involving Microsoft and India, ChatGPT launched without completing required security checks; when confronted, Altman claimed the general counsel had approved it, and the general counsel said they had no idea where that came from.
Amodei's notes paint a picture of a company gradually abandoning its original mission under commercial pressure. He documented a clause written into the 2019 Microsoft investment deal, one that basically said if someone else found a safer path to AGI, OpenAI would help them instead of competing. It was the safety guardrail he cared about most. Then he discovered Microsoft had negotiated veto power over that exact clause. On paper it looked good. In reality, it was dead on arrival.
There's this wild detail about the Superalignment Team. OpenAI announced it would dedicate 20% of its computing power to the effort, potentially over a billion dollars' worth. The rhetoric was heavy: without solving alignment, AGI could lead to human extinction. But people who actually worked on the team said the real allocation was 1-2% of total capacity, running on the oldest hardware. The team got disbanded without finishing anything.
When journalists asked OpenAI about their existential safety research team, the PR response was almost comedic: "That's not an actual thing." Altman himself said his intuition doesn't align with traditional AI safety approaches.
Meanwhile, there's this whole other story brewing. OpenAI's CFO Sarah Friar apparently told colleagues she thinks the company isn't ready for an IPO this year: too much procedural work, too much financial risk from Altman's $600 billion computing spending commitment over five years. She's not even convinced the revenue growth can support it. But Altman wants to push for a Q4 IPO. And here's the kicker: Friar doesn't report to Altman anymore. As of August 2025, she reports to the CEO of OpenAI's applications business, who just went on medical leave. So you've got a company racing toward an $850 billion IPO with the CEO and CFO at odds, the CFO not reporting to the CEO, and her supervisor on leave. Even Microsoft executives were apparently frustrated, with one saying there's a real chance Altman ends up remembered like Bernie Madoff or SBF.
One former board member gave what might be the sharpest character assessment: Altman combines a desperate need to be liked in every face-to-face interaction with a near-sociopathic indifference to deceiving people. It's the perfect profile for a salesperson. Jobs had his reality distortion field, but even Jobs never told customers that not buying his product would kill the people they love. Altman has basically said that about AI.
The thing that makes this actually matter: if this were just drama at a regular tech company, it'd be gossip. But OpenAI isn't a regular company. They're developing what could be the most powerful technology in human history, the same technology that could reshape global economies or create bioweapons. Every safety mechanism has been gutted. The nonprofit mission is gone. Neither the former chief scientist nor the former head of safety trusts the CEO. Partners are comparing him to fraudsters.
And under all this, one person unilaterally decides when to release models that could reshape humanity's future.
Gary Marcus (NYU AI professor, long-time safety advocate) put it plainly after reading the report: if OpenAI builds something that can create bioweapons or launch cyberattacks, are you comfortable with Altman being the sole decision-maker on whether to release it?
OpenAI's response was basically: "These are recycled stories with anonymous sources and personal agendas." Altman didn't address the specific allegations or deny the memo. Just questioned motives.
The arc is almost too neat: idealists start a nonprofit worried about AI risks. They make breakthrough tech. Capital floods in. Capital demands returns. The mission cracks. The safety team gets cut. Dissenters leave. The nonprofit becomes a for-profit. The board transforms from a safety check into an ally of the CEO. The company that promised 20% of its computing power for humanity's safety now has PR saying that wasn't real.
Over a hundred people used the same phrase: unconstrained by truth.
And he's taking it public at an $850 billion valuation.