I just read something that left me quite unsettled about what is really happening at OpenAI. The New York Times recently published an investigation for which journalists, including Ronan Farrow, interviewed over 100 people and obtained internal documents that had never been revealed before. The story that emerges is much darker than what was publicly known.
The most striking detail: Ilya Sutskever, OpenAI’s chief scientist, wrote a 70-page memo a couple of years ago documenting a consistent pattern. His conclusion was blunt: Sam systematically lies to executives and the board. This is not speculation; these are concrete cases. For example, in December 2022, Sam assured the board at a meeting that GPT-4’s features had passed security review. When board members asked to see the documents, they discovered that two of the most controversial features had never been approved by the security panel.
But what caught my attention even more was what journalists found in the private notes of Dario Amodei, the Anthropic founder who previously worked at OpenAI: over 200 pages documenting how the company retreated step by step under commercial pressure. Amodei discovered that Microsoft had inserted a veto clause that essentially nullified the security guarantee he valued most in the entire deal. From the day of signing, it was meaningless.
There’s another detail that will give you chills. OpenAI publicly announced that it would allocate 20% of its computing capacity to a superalignment team, a commitment potentially worth over a billion dollars. The announcement was deadly serious, invoking existential risks to humanity. The reality, according to four people who worked on that team: the actual compute was between 1% and 2%, and it ran on the oldest hardware. The team was later disbanded without completing its mission.
The absurd part: when journalists asked to interview the personnel responsible for existential safety research, the PR response was that no such thing really exists at OpenAI.
And this came to light just as it was revealed that CFO Sarah Friar has serious disagreements with Sam. Friar believes OpenAI isn’t ready to go public this year, but Sam wants to push ahead with the IPO. The strangest part: Friar no longer reports directly to Sam but to another executive who is on medical leave. That is the internal structure of a company preparing for an IPO.
Microsoft executives, for their part, have run out of patience. They say Sam distorts facts, breaks promises, and constantly walks back agreements. One even compared him to Bernie Madoff or SBF.
A former board member described him like this: Sam has an extremely rare combination of qualities. In every face-to-face interaction he shows a strong desire to please, yet simultaneously an almost sociopathic indifference to the consequences of deceiving others. For a salesperson, it’s the perfect gift.
What worries me is that OpenAI isn’t a typical tech company. By its own account, it may be developing the most powerful technology in human history, one capable of reconfiguring the global economy and labor markets, and one that, according to the company’s own documents, could also be used to create biochemical weapons or launch cyberattacks. Yet the security safeguards have been left in the dust, and the nonprofit mission has given way to the race for an IPO.
The former chief scientist and the former head of security consider the CEO untrustworthy. Do we really feel safe leaving it up to one person to unilaterally decide when to release AI models that could change the course of humanity?
OpenAI’s response was terse: the article rehashes previously reported events using anonymous claims and selective anecdotes. Sam did not respond to specific accusations and did not deny the memo’s authenticity; he only questioned the motives behind the piece.
Ten years of OpenAI, summarized: a group of idealists concerned about AI risks created a nonprofit organization. They achieved extraordinary advances. Capital arrived en masse. The mission began to shift. The security team was disbanded. Critics were pushed out. The nonprofit structure transformed into a for-profit entity. The board that once could shut down the company is now full of the CEO’s allies. And the company that committed 20% of its computing power to safety now has public relations saying that work doesn’t exist.
More than a hundred eyewitnesses have applied the same label: he is not bound by the truth. And he is about to take this company into an IPO valued at over $850 billion.