OpenAI has clarified where the "Goblin" came from: a personality reward signal contaminated the entire training pipeline.
AIMPACT news, April 30 (UTC+8): according to BlockBeats monitoring, OpenAI has published a retrospective on the "goblin" problem that has plagued multiple generations of GPT models. Starting with GPT-5.1, the models increasingly crammed fantasy-creature metaphors, such as goblins and fairies, into their answers, and user complaints kept coming. After GPT-5.1 launched, the frequency of the word "goblin" in ChatGPT conversations rose by 175%. By GPT-5.4, the issue had fully blown up.
The root cause is ChatGPT's "Nerdy" personality customization feature. The system prompt for this personality instructs the model to "dissolve the seriousness in the fun of language," "acknowledge the strangeness of the world," and "enjoy it." During training, the reward signals used to reinforce this personality style gave higher scores to outputs containing fantasy-creature vocabulary; the bias is observable in 76.2% of the data. The problem is that the reward signals take effect only under the "Nerdy" personality, while reinforcement learning does not guarantee that the behaviors it learns stay confined to their trigger conditions. Once the model is rewarded for a speaking habit under one condition, that habit spreads to other scenarios through subsequent training.
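As a rough sketch of that failure mode (the names and numbers below are illustrative, not OpenAI's actual training code): the reward bonus is gated on the active personality, but the policy update that consumes it adjusts shared model weights, so what gets reinforced is the vocabulary habit itself, not the personality-plus-vocabulary conjunction.

# Hypothetical sketch of a personality-gated reward term;
# none of these names come from the OpenAI report.
FANTASY_WORDS = {"goblin", "fairy", "gnome", "sprite"}

def style_reward(response: str, personality: str) -> float:
    """Toy reward: a base quality score, plus a bonus for
    fantasy vocabulary that fires only under 'Nerdy'."""
    base = 1.0  # stand-in for the real quality score
    if personality == "nerdy":
        hits = sum(word in response.lower() for word in FANTASY_WORDS)
        return base + 0.5 * hits  # the contaminated, personality-gated bonus
    return base

# The gate lives in the reward, not in the policy update: every
# rewarded sample nudges the same shared weights, so the vocabulary
# habit can leak into non-"Nerdy" contexts during later training.
print(style_reward("The goblin of complexity lurks here.", "nerdy"))    # 1.5
print(style_reward("The goblin of complexity lurks here.", "default"))  # 1.0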
The diffusion path is clear: the reward signals encouraged goblin-laden outputs, those outputs then appeared in later supervised fine-tuning (SFT) data, and the model grew ever more accustomed to producing such terms, forming a positive feedback loop. In the data, the "Nerdy" personality accounts for only 2.5% of all ChatGPT responses yet contributes 66.7% of goblin mentions. In GPT-5.4, the goblin appearance rate under the "Nerdy" personality surged 3881% compared with GPT-5.2. GPT-5.5 began training before the root cause was identified, so goblins had already made their way into its SFT data.
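Taking the article's figures at face value, a quick back-of-the-envelope check (ours, not from the OpenAI retrospective) shows how concentrated the behavior was before it spread:

# "Nerdy" is 2.5% of all responses but 66.7% of goblin mentions.
nerdy_share_of_responses = 0.025
nerdy_share_of_mentions = 0.667

# Per-response mention rate relative to the overall average
lift_nerdy = nerdy_share_of_mentions / nerdy_share_of_responses              # ~26.7x
lift_other = (1 - nerdy_share_of_mentions) / (1 - nerdy_share_of_responses)  # ~0.34x
print(f"'Nerdy' responses mention goblins about "
      f"{lift_nerdy / lift_other:.0f}x more often than the rest")            # ~78x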
OpenAI discontinued the "Nerdy" personality in late March, removed the reward signals favoring fantasy creatures, and filtered the training data. For the already-launched GPT-5.5, it added suppression instructions to Codex's developer prompts. OpenAI says the investigation has produced a new set of model-behavior auditing tools.
(Source: BlockBeats)