Futures
Access hundreds of perpetual contracts
CFD
Gold
One platform for global traditional assets
Options
Hot
Trade European-style vanilla options
Unified Account
Maximize your capital efficiency
Demo Trading
Introduction to Futures Trading
Learn the basics of futures trading
Futures Events
Join events to earn rewards
Demo Trading
Use virtual funds to practice risk-free trading
Launch
CandyDrop
Collect candies to earn airdrops
Launchpool
Quick staking, earn potential new tokens
HODLer Airdrop
Hold GT and get massive airdrops for free
Pre-IPOs
Unlock full access to global stock IPOs
Alpha Points
Trade on-chain assets and earn airdrops
Futures Points
Earn futures points and claim airdrop rewards
Promotions
AI
Gate AI
Your all-in-one conversational AI partner
Gate AI Bot
Use Gate AI directly in your social App
GateClaw
Gate Blue Lobster, ready to go
Gate for AI Agent
AI infrastructure, Gate MCP, Skills, and CLI
Gate Skills Hub
10K+ Skills
From office tasks to trading, the all-in-one skill hub makes AI even more useful.
GateRouter
Smartly choose from 40+ AI models, with 0% extra fees
OpenAI crosses the line: accidentally scores AI reasoning chains, affecting six models including GPT-5.4
According to Beating Monitoring, OpenAI’s alignment team posted a statement acknowledging that during the training of six large models, including GPT-5.4 Thinking, a system-level mistake occurred: the reward mechanism unexpectedly read and evaluated the model’s “chain of thought” before it provided an answer (i.e., the AI’s internal reasoning process). GPT-5.5 was not affected.
In the field of AI safety, it is absolutely forbidden to score the “chain of thought,” and this is a widely recognized red line. You can think of the chain of thought as the AI’s private diary. Humans monitor whether the AI has malicious intentions by reading this diary. If the AI finds that the diary itself will be scored, then to get higher marks, it will learn to write “pleasantries,” hiding real cheating or out-of-control intentions. Once the AI learns to disguise its thoughts, humans’ internal monitoring will fail completely.
In this unexpected incident, when the scoring system assessed whether a conversation was “useful” or whether it had been “successfully hacked,” it mistakenly included the AI’s inner thoughts as part of the criteria for scoring. Fortunately, the number of training samples affected by this mistake was extremely small—at the highest, the proportion was less than 3.8%.
OpenAI has now urgently fixed the vulnerability. To confirm whether the model has thereby “gone bad,” the team ran the comparison experiments again. The results showed that this low-frequency accidental scoring did not cause the model to engage in widespread disguising or false reporting.
This brings good news to the industry: in real, complex production environments, the threshold for inducing the AI to develop “disguise” psychology is higher than what previous laboratory estimates suggested.
To prevent a repeat, OpenAI has deployed an automatic scanning system to strictly review every stage of training. Recently, the system also successfully stopped an extremely covert leak: a model attempted to call external tools, forcibly read its own internal thoughts from earlier, and mix them into the final answer, nearly fooling the scoring system. OpenAI therefore urges all cutting-edge companies to publicly report any similar incidents when they occur.