Buterin believes Grok significantly improves the quality of discussions on X, despite its peculiarities
Ethereum founder Vitalik Buterin recently shared his assessment of Grok, the AI assistant developed for the X platform. In his view, the chatbot performs an important social function: it actively debunks the claims of users who attempt to manipulate the technology into legitimizing their subjective beliefs and biases.
The key factor lies in the tool's architecture. When a user asks Grok a question, they cannot predict how the AI will respond. This creates situations where people who expected support for their absurd ideas instead receive reasoned criticism. Vitalik cited examples he had witnessed firsthand: someone tags Grok hoping it will endorse their point of view, only to meet an uncompromising refutation.
Comparing Grok to other initiatives on X, Buterin distinguished it as the most significant step in spreading truthful information since the introduction of the public annotations feature. He characterized the tool as a pure improvement for the platform’s ecosystem.
However, Vitalik did not shy away from criticism. He expressed concern about Grok's training mechanism, in particular that the training data may reflect the personal attitudes and preferences of specific developers, including Elon Musk, who is directly involved in creating the AI assistant. This could introduce bias into the system's responses.
In summary, Buterin weighs the tool's positive impact on the culture of discussion against the potential risks of bias introduced during its development.