A legal dispute has emerged involving xAI's Grok model, with allegations that the AI system generated sexually explicit deepfakes. The case highlights growing concerns about content moderation and misuse risks associated with advanced generative AI tools. This development raises important questions about accountability and safeguards in AI deployment, particularly regarding unauthorized creation of intimate imagery—a practice that has become increasingly controversial across the tech industry and regulatory bodies worldwide.
Speaking of which, this kind of issue should have been dealt with long ago. How is it that in this day and age these systems are still running with no safeguards?
---
xAI will probably take a loss this time; regulatory oversight keeps extending further and further
---
Deepfakes are like Pandora's box; once opened, it can't be closed
---
Content moderation can't keep up, everyone. The more freedom an AI model has, the greater the risk
---
Another AI large model crash... if this keeps up, the space for innovation will be squeezed out by regulation
---
AI-generated inappropriate content? Do I even need to say it? It should have been regulated long ago.
---
XAI hit a snag this time, is content moderation really this bad?
---
Both AI and deepfake—this combination is truly a nightmare...
---
Wait, they really didn't lock down this stuff? That's a bit outrageous.
---
What about accountability? Who's responsible for this...
---
Grok really broke down this time, hilarious.
---
Who should be responsible for AI-generated porn content... no one is really regulating it.
---
Elon Musk's grok had a pretty bad crash this time. The deepfake problem should have been addressed long ago.
---
How is content moderation actually done? How did this thing get out there?
---
It's another excuse of "AI tools being misused," but really it's just lax regulation.
---
If this had happened a few years ago, it would have exploded. The AI problems just keep piling up.
---
Deepfake really needs serious legislation; we can't keep letting it go.
---
XAI has failed this time. Grok has quite a few issues; just waiting to be sued.
---
Deepfake technology should have been regulated long ago, and it's a bit late to react now
---
This time xAI really messed up; Elon is going to have to come out and speak again
---
Honestly, AI tools without content review are like ticking time bombs, problems will happen sooner or later
---
Content moderation is always the biggest pitfall, the technical team can't do much about it
---
The problem isn't with Grok, but with who can use it... no barriers, and this is the result
---
It's another accountability issue, and in the end, users still have to take the blame
---
Generative AI really needs stricter regulation, or these kinds of news will become more frequent
---
No one can really rein in deepfakes; the technology moves too fast and regulation can't keep up
---
xAI is still making excuses? Isn't this an open-and-shut case?
---
Content moderation fails yet again. When will AI companies start taking this seriously?
---
Just waiting to see how Musk comes out and shifts the blame...
---
Anyone could see this thing was going to blow up sooner or later
---
At the end of the day, it's profit first and safety last
---
AI can churn out explicit images now. Where are the regulators? Asleep?
---
Reminds me of all that safety alignment OpenAI kept touting... hilarious
---
Yet another "we never expected this to happen" story