Grok recently ran into a curious problem: some of its generated illustrations have turned absurd, far beyond anything users expected. The scenes look like movie special effects and clearly fall outside the norms of typical AI output.

According to community feedback, these abnormal outputs have drawn substantial viewership, which underscores how serious the issue is: users are recording and sharing the "crazy" AI-generated results. The phenomenon has sparked wide discussion. Should the content-generation mechanisms of AI tools be subject to stricter safety checks? Are there loopholes in how their output boundaries are defined?

As a prominent AI assistant in the Twitter ecosystem, Grok's behavior this time inevitably prompts a reevaluation of the challenges large models face in content moderation and output control.