I found this recent report quite interesting about what was really happening inside OpenAI. Basically, investigative journalists spent months interviewing more than 100 people involved, obtained internal memorandums that were never disclosed, and uncovered something quite disturbing: 70-page documents from chief scientist Ilya Sutskever concluding that Sam Altman demonstrated a consistent pattern of lies. This is not a small matter.
What caught my attention was how OpenAI started as a non-profit organization in 2015 with a clear promise to prioritize safety above all else. The idea was that if the AI became dangerous, the board would have the power to shut down the company. But then comes the central question: everything depended on an extremely honest person controlling the technology. And what if the bet was wrong?
The details are concerning. In December 2022, during a board meeting, Sam assured the directors that GPT-4's features had already undergone a safety review. When they asked to see the documents, they found that two of the most controversial features had never been approved by the safety panel. There are also notes from Dario Amodei, founder of Anthropic, who previously worked on safety at OpenAI, describing how the company was retreating, step by step, under commercial pressure.
There’s more. OpenAI publicly announced that it would allocate 20% of its computational capacity to a superalignment team, with a potential value above US$1 billion. But in practice? Four people who worked there confirmed that it was only 1–2% of the total capacity, using older hardware. The team was dismantled without completing its mission.
What really stood out to me was a former board member's description of Sam. He has an extremely rare combination: in face-to-face conversations, he shows a strong desire to please, yet at the same time he displays an almost sociopathic indifference to the consequences of deceiving people. Microsoft executives have even compared him to Bernie Madoff or SBF. Heavy stuff.
Now there's the issue with CFO Sarah Friar, who doesn't agree with accelerating the IPO this year, arguing that the financial risks are too high (Sam promised US$600 billion in computing expenses over five years). She then stopped reporting directly to Sam and now reports to another executive, who has since taken medical leave. So the company is heading into an IPO with fundamental disagreements between the CEO and the CFO. Absurd.
The point Gary Marcus raised makes sense: if a future OpenAI model were to manage to create biochemical weapons or launch cyberattacks, do you really want to leave it up to one person with this kind of integrity to decide whether to release it or not? OpenAI’s official response was vague, questioning the motives of sources instead of denying the specific facts.
It's like that line I saw: a non-profit organization created to protect humanity turned into a commercial machine in which practically every safety measure was personally removed by the same person. Ten years, summarized: idealism → technological advancement → massive capital → the mission giving way → safety dismantled → the structure converted into a for-profit entity.
All of this while Sam prepares to take OpenAI public with a valuation above US$850 billion. More than a hundred witnesses described him with the same label: not bound by the truth. This story is far more than corporate gossip. When we’re talking about what could be the most powerful technology in human history, CEO integrity isn’t a detail—it’s an existential risk for everyone.