Elon Musk sues OpenAI for stealing a charity, while he himself is stealing from OpenAI
May 4 Court Hearing: The Self-Defeating Bombshell That Reflects the Entire AI Era.
AKASHA · 2026.05.05
May 4, federal court in Oakland, California: week two of the Musk vs. Altman trial.
In court, Musk dropped a self-defeating bomb: he admitted that xAI "to some extent" distilled OpenAI's models, meaning it used OpenAI model outputs to train xAI's own models.
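Stripped to its mechanics, distillation means training a "student" on a "teacher's" output probabilities rather than on the teacher's original data. A minimal sketch with a toy logistic teacher and a one-parameter student; everything here is invented for illustration, no real model or API is involved:

```python
import math

# Toy "teacher": a fixed model that emits soft probabilities over two classes.
def teacher(x):
    z = 2.0 * x - 1.0                      # the teacher's hidden decision rule
    return 1.0 / (1.0 + math.exp(-z))      # P(class = 1 | x)

# "Student": a single logistic weight and bias, trained ONLY on teacher
# outputs, never on the teacher's training data -- that is distillation.
w, b = 0.0, 0.0
lr = 0.5
inputs = [i / 10.0 for i in range(11)]     # queries sent to the teacher

for _ in range(2000):
    for x in inputs:
        t = teacher(x)                     # soft label from the teacher
        s = 1.0 / (1.0 + math.exp(-(w * x + b)))
        g = s - t                          # cross-entropy gradient wrt logit
        w -= lr * g * x
        b -= lr * g

# The student now mimics the teacher without ever seeing its weights.
print(round(w, 1), round(b, 1))
```

Note the asymmetry the court case turns on: the student never touches the teacher's weights or training data, only its answers to queries, yet it ends up reproducing the teacher's decision rule (here, roughly w = 2, b = -1).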
Yet he came to court to sue Sam Altman for "stealing a charity": turning OpenAI, a nonprofit meant to "benefit humanity," into a profit machine valued at $852 billion.
The man accusing others of "stealing" is "stealing" himself.
This is not just an awkward situation for Musk. It’s an awkward situation for the entire AI era.
If you think only Musk is this self-defeating, look at this complete “mutual accusation chain”:
OpenAI accuses others of stealing
In early 2025, OpenAI publicly stated that there were "signs" that DeepSeek had distilled GPT models.
OpenAI is also accused of theft
In December 2023, The New York Times sued OpenAI and Microsoft in Manhattan Federal Court, alleging unauthorized use of millions of NYT articles to train GPT. Reddit and Anthropic have also been defendants.
Musk accuses Altman of stealing
And now Musk is personally suing Altman for "stealing the charity" that was OpenAI.
Musk himself is also stealing
But he admitted in the same court hearing that xAI used OpenAI model outputs to train its own models.
From the model layer to the data layer, from bottom to top — everyone is accusing others of stealing, everyone is being accused of stealing.
FACT OF THE ERA
There is no “innocence” in the AI era.
Strangely, despite the industry knowing about mutual distillation and mutual lawsuits, models keep improving, and valuations keep soaring. OpenAI at $852 billion, Anthropic at $900 billion, and xAI is chasing.
Why?
Because the “property rights” in the AI era are fundamentally different from those in the industrial era — the underlying physics are not the same.
Industrial Era Property Rights
Based on physical atoms. A piece of iron, a building, a machine — if you take it away, it’s gone. Patents and laws protect “exclusivity.”
AI Era Property Rights
Based on bits. A set of model weights, a single inference output, a training dataset — can be copied infinitely without loss. “Exclusivity” does not physically exist.
The “intellectual property law” of the industrial era was designed for limited media. But the carrier of the AI era is bits — bits are not diminished by copying.
Not everyone is shameless; the rules just can’t keep up with physics.
Will the verdict in Musk vs. Altman solve the problem?
No.
Even if Musk wins — will it change the fact that xAI distilled OpenAI? No.
Even if Altman wins — will it change the fact that OpenAI trained on NYT data? No.
Laws always chase technology, not define it. When Napster was judged illegal, MP3 copying was already irreversible. The ruling could bankrupt Napster but couldn’t make MP3 files on 100 million PCs disappear.
Court rulings are tools of the industrial age — they only regulate “things that can be monopolized.” The core material of the AI era is bits — bits cannot be monopolized, so rulings are inherently ineffective.
— THE NEW RULE —
The next generation of rules will not be handed down by judges; it will be established through protocols.
Every media generation has fought the same copyright wars. The winners are never lawyers, but protocols.
Music
The 2000s: Napster was sued into bankruptcy → MP3 copying became irreversible → iTunes introduced the "$0.99 per song" payment protocol → Spotify moved to streaming with ad-revenue sharing, making pay-per-play automatic. Today no one accuses Spotify of stealing music, because the flow of copyright has been protocolized.
Video
The mid-2000s: YouTube was sued to the brink of bankruptcy → it introduced Content ID, which automatically recognizes copyrighted content and shares ad revenue with rights holders. Since then, YouTube has been both a "hotbed of piracy" and "the largest copyright payer."
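The mechanism behind Content ID can be sketched as a fingerprint registry plus a matcher. In this toy version, exact hashes of text chunks stand in for the perceptual audio/video fingerprints a real system uses, and all names are invented:

```python
import hashlib

# Rights holders register fingerprints of their content.
registry = {}  # fingerprint -> rights holder

def fingerprint(chunk: str) -> str:
    # Exact hash as a stand-in; real systems use fuzzy perceptual fingerprints.
    return hashlib.sha256(chunk.lower().strip().encode()).hexdigest()

def register(content: str, owner: str, chunk_size: int = 20):
    # Fingerprint the content in fixed-size chunks.
    for i in range(0, len(content), chunk_size):
        registry[fingerprint(content[i:i + chunk_size])] = owner

def scan(upload: str, chunk_size: int = 20):
    """Return rights holders whose registered content appears in an upload."""
    hits = set()
    # Slide a window over the upload so matches are found at any offset.
    for i in range(0, len(upload) - chunk_size + 1):
        owner = registry.get(fingerprint(upload[i:i + chunk_size]))
        if owner:
            hits.add(owner)
    return hits

register("all along the watchtower there must be some way out", "SongCorp")
print(scan("clip: all along the watchtower there must be some way out of here"))
# → {'SongCorp'}
```

The point of the design is what happens after a match: instead of a takedown lawsuit, the platform routes a share of the upload's ad revenue to the matched rights holder, so enforcement and payment happen in the same automatic step.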
Photos / Documents
Creative Commons: authors declare their terms (attribution required, non-commercial only, no derivatives) in machine-readable protocol code, making the license itself the operational rule of copyright. No lawsuit needed each time.
In each case the answer was not "stricter laws" but the protocolization of copyright flow: creators, users, and platforms exchange value automatically through protocols, not lawsuits.
The “Spotify moment” in the AI era has not yet arrived.
— OpenAI trains on NYT data and pays NYT licensing fees: every deal negotiated by lawyers, one payment at a time.
— xAI distills OpenAI outputs, and no one pays, because there is no protocol layer for "distillation detection + automatic settlement."
— DeepSeek distills ChatGPT, and OpenAI can only protest publicly; it has no way to trace the use automatically.
What is needed is a protocol layer: one that makes every inference, every data use, and every model distillation in the AI era as automatically detected, settled, and traced as every play on Spotify.
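The detect/settle/trace loop such a layer would run is mechanically simple; the hard part is getting everyone to adopt it. A toy sketch, with every name, rate, and attribution share invented for illustration:

```python
from collections import defaultdict

# A toy "flow protocol": every model call is metered, attributed to its
# upstream rights holders, and settled automatically -- the Spotify
# per-play model applied to inference.

RATE_PER_CALL = 0.001          # hypothetical licensing fee per inference
ledger = defaultdict(float)    # rights holder -> amount owed

# Attribution table: which upstream sources a model's outputs draw on.
# In a real protocol this would come from provenance metadata or
# distillation detection, not a hand-written dict.
attribution = {
    "student-model": {"teacher-lab": 0.7, "news-corpus": 0.3},
}

def infer(model: str, prompt: str) -> str:
    # 1. detect: the metered call itself is the detectable event
    # 2. settle: split the per-call fee by attribution share
    for source, share in attribution.get(model, {}).items():
        ledger[source] += RATE_PER_CALL * share
    # 3. trace: the ledger is the audit trail
    return f"{model} answered: {prompt}"   # stand-in for real inference

for _ in range(1000):
    infer("student-model", "hello")

print({k: round(v, 4) for k, v in ledger.items()})
# → {'teacher-lab': 0.7, 'news-corpus': 0.3}
```

After 1,000 metered calls, upstream sources have been paid in proportion to their attributed share, with no lawyer in the loop; the open problem the article points at is the attribution table itself, which today nobody can compute or enforce.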
This is the position Hetu is advocating — not siding with Musk, not with Altman, but establishing a protocol that enables bits to flow.
The next winner in the AI era is the one who establishes a “flow protocol” for everyone.
Musk sues Altman for stealing charity, while he himself is stealing from OpenAI.
Altman counters with Musk's own text messages as evidence, while he himself trains GPT on NYT data.
No one is innocent, but everyone is running.
This war will not end with court rulings.
It will wait for the day when the “Spotify of AI” appears.