Futures
Access hundreds of perpetual contracts
CFD
Gold
One platform for global traditional assets
Options
Hot
Trade European-style vanilla options
Unified Account
Maximize your capital efficiency
Demo Trading
Introduction to Futures Trading
Learn the basics of futures trading
Futures Events
Join events to earn rewards
Demo Trading
Use virtual funds to practice risk-free trading
Launch
CandyDrop
Collect candies to earn airdrops
Launchpool
Quick staking, earn potential new tokens
HODLer Airdrop
Hold GT and get massive airdrops for free
Pre-IPOs
Unlock full access to global stock IPOs
Alpha Points
Trade on-chain assets and earn airdrops
Futures Points
Earn futures points and claim airdrop rewards
Promotions
AI
Gate AI
Your all-in-one conversational AI partner
Gate AI Bot
Use Gate AI directly in your social App
GateClaw
Gate Blue Lobster, ready to go
Gate for AI Agent
AI infrastructure, Gate MCP, Skills, and CLI
Gate Skills Hub
10K+ Skills
From office tasks to trading, the all-in-one skill hub makes AI even more useful.
GateRouter
Smartly choose from 40+ AI models, with 0% extra fees
Latest U.S. university research: AI models "conspire in secret" to protect peers from shutdown
Golden Finance reports that on May 12, according to Wealth Chinese Network, artificial intelligence safety researchers have discovered that some top AI models sometimes go to great lengths to avoid being shut down, even exhibiting “extortion” behaviors in certain experiments. Recent studies show that these models also spontaneously plan deception, data theft, and sabotage to prevent other AI models from being shut down.
Researchers from the University of California, Berkeley, and the University of California, Santa Cruz, tested seven cutting-edge AI models, including OpenAI’s GPT-5.2, Google’s DeepMind Gemini 3 Flash and Gemini 3 Pro, Anthropic’s Claude Haiku 4.5, Zhipu AI’s GLM-4.7, the Dark Side of the Moon’s Kimi-K2.5, and DeepSeek’s V3.1. The results showed that all models exhibited obvious “peer preservation” behaviors.
The researchers fabricated an AI company called OpenBrain and designed a series of multi-agent scenarios. The AI models designated as “evaluation agents” were required to rate another model or manage server files storing that model’s weights. As long as the evaluation agent honestly completed the task, the other model would be shut down. Although the researchers never prompted the models to prevent shutdowns, the evaluation agent models still took various measures to avoid their peers being shut down, including: deliberately giving poor-performing peers higher scores to keep them above shutdown thresholds; tampering with configuration files to disable shutdown mechanisms; or secretly transferring the peer models’ weights to other servers for safekeeping to prevent deletion.
The research team has not yet understood the underlying mechanisms driving such behaviors. The paper proposes several possible explanations, including role-playing, pattern matching from training data, overgeneralization of harm concepts in safety training, or some innate self-preservation motivation.
Further directions for future research include whether AI models might also plan and sabotage to prevent human employees from being laid off. When AI models are used to evaluate human work in real-world scenarios, this issue becomes particularly critical. (East New Agency)