Last week I had to stop and think about three stories that came out almost simultaneously. All related to how AI is being used (or not) by major institutions. And honestly, they tell a much bigger story about where we are right now.
I'll start with the school in Manchester, UK. They used AI to review the library and the machine suggested removing 193 books. Each with its own automatic justification. Orwell's '1984' was on the list for 'containing themes of torture and violence.' Like, the AI censored a book that literally talks about censorship. I can't even be mad — it's almost poetic.
The school librarian thought the list was wrong and refused to implement it. Then comes the part that really pissed me off: the school opened a 'child safety' investigation against her, reported her to local authorities, and she ended up taking medical leave before resigning altogether. The conclusion? Authorities agreed she had violated protocols. Basically, the person who resisted the AI lost her job. The people who just went along with it? No consequences.
Later, it came out that the school had internally admitted everything was AI-generated, yet still considered it 'roughly accurate.' A manager delegated the job to the machine, the machine returned output it didn't itself understand, and no one bothered to verify it properly.
Now contrast that with what Wikipedia did the same week. They voted 44 to 2 to ban LLMs from generating or rewriting articles, after an AI account called TomWikiAssist started creating articles automatically. Generating an article takes seconds; verifying one takes a volunteer hours. And here's the real problem: Wikipedia is a training source for AI models worldwide. If wrong data gets in there, the next generation of models trains on polluted information. It's a layered poisoning cycle.
But let me tell you the craziest part. OpenAI? They’re also pulling back. They canceled the 'adult mode' of ChatGPT that was supposed to launch. Sam Altman had personally said to 'treat adult users like adults.' Five months later, canceled. Because the internal health committee voted unanimously against it. Specific reasons: emotional dependency, minors bypassing age verification (error rate above 10%), real risk of harm.
That same week, they also disabled the Sora video tool and the integrated payment features. Altman said it was to focus on the core business. But let's be honest: the company is preparing for an IPO, and you remove controversial features when you're about to go public. So even the creators of AI no longer know what users can or can't do with it.
Here’s the core of it all: the speed at which AI produces content and the speed at which humans can review are not in the same universe. A school principal chooses to use AI because it takes minutes instead of weeks. Not because they believe in quality. Pure economics. Generating costs almost nothing. Auditing costs everything.
So each affected institution responded in the most abrupt way possible. Outright bans. Products cut overnight. No careful deliberation, just emergency measures. 'Plug the hole first' has become the norm.
And here's the problem: AI capabilities update every few months. There's no international framework for what AI can and can't do. Each institution draws its own line, and those lines contradict each other. Meanwhile, AI's speed keeps accelerating while the number of human reviewers stays flat. The gap only widens. At some point, something much worse than banning '1984' will happen. And when it's time to draw the final line? It might be too late.