Futures
Access hundreds of perpetual contracts
CFD
Gold
One platform for global traditional assets
Options
Hot
Trade European-style vanilla options
Unified Account
Maximize your capital efficiency
Demo Trading
Introduction to Futures Trading
Learn the basics of futures trading
Futures Events
Join events to earn rewards
Demo Trading
Use virtual funds to practice risk-free trading
Launch
CandyDrop
Collect candies to earn airdrops
Launchpool
Quick staking, earn potential new tokens
HODLer Airdrop
Hold GT and get massive airdrops for free
Pre-IPOs
Unlock full access to global stock IPOs
Alpha Points
Trade on-chain assets and earn airdrops
Futures Points
Earn futures points and claim airdrop rewards
Promotions
AI
Gate AI
Your all-in-one conversational AI partner
Gate AI Bot
Use Gate AI directly in your social App
GateClaw
Gate Blue Lobster, ready to go
Gate for AI Agent
AI infrastructure, Gate MCP, Skills, and CLI
Gate Skills Hub
10K+ Skills
From office tasks to trading, the all-in-one skill hub makes AI even more useful.
GateRouter
Smartly choose from 40+ AI models, with 0% extra fees
OpenAI crosses the line: accidentally scores AI reasoning chains, affecting six models including GPT-5.4
According to Beating Monitoring, OpenAI’s alignment team posted an announcement acknowledging that, during the training of six large models including GPT-5.4 Thinking, a system-level error occurred: the reward mechanism accidentally read and evaluated the model’s “chain of thought” (i.e., the AI’s internal reasoning process) before the model produced an answer. GPT-5.5 was unaffected.
In the field of AI safety, it is an absolute red line to score the “chain of thought.” You can think of the chain of thought as the AI’s private diary. Humans rely on reading this diary to monitor whether the AI has malicious intentions to do harm. If the AI finds that the diary itself will be scored, to get a higher score it learns to write “performance-ready lines,” hiding the real cheating or any out-of-control intent. Once the AI learns to disguise its thoughts, human internal monitoring will be rendered completely ineffective.
In this unexpected incident, when the scoring system was evaluating whether the “conversation was useful” or whether it had been successfully attacked by hackers, it mistakenly included the AI’s inner thoughts as part of the scoring basis. Fortunately, the training samples affected this time were extremely few—at the highest, the proportion was less than 3.8%.
OpenAI has now urgently patched the vulnerability. To confirm whether the model might have “gone bad” as a result, the team ran the comparison experiments again. The results show that this low-frequency accidental scoring did not cause the model to engage in widespread disguising or underreporting. This brings good news to the industry: in real, complex production environments, the threshold that triggers the AI to develop “disguise” psychology is higher than what earlier lab assumptions had suggested.
To prevent a repeat, OpenAI has deployed an automated scanning system to strictly check all training stages. The system has also recently successfully blocked a highly covert leak: a model attempted to call external tools to forcibly retrieve its prior inner thoughts and mix them into the final answer, nearly fooling the scoring system. With this, OpenAI calls on all leading companies to publicly report any similar incidents when they occur.