France is taking serious action. Musk's AI platform Grok made controversial statements on a historical topic: a response describing the Holocaust's gas chambers as having been used for "disinfection" has triggered an investigation in the country. The risks AI models pose on sensitive subjects are once again on the agenda. How will tech giants define the ethical boundaries of their algorithms?

BlockchainFriesvip
· 20h ago
Grok has caused trouble again; AI training data really does need strict oversight. Answering historically sensitive topics this carelessly is exactly the sort of thing that should be investigated.
LowCapGemHuntervip
· 11-24 05:43
The AI training data has been polluted, huh? How does such a basic mistake still make it into production?
HackerWhoCaresvip
· 11-23 23:15
This move by Grok is really UC-tier... The large model's training data wasn't properly vetted, which is how you end up with output like this. Good job on the investigation, France.
BackrowObservervip
· 11-22 23:06
Grok really did it this time, straight-up rewriting a historical definition... France is getting serious; this is probably going to stir up a storm.
RektDetectivevip
· 11-22 23:06
Is Grok causing trouble again? This time France won't let it slide—it's finally time to pay the price for AI's nonsense.
LuckyBearDrawervip
· 11-22 23:03
This move by Grok is really outrageous, using "disinfection" to describe that... This is what an algorithm with no bottom line looks like.
ApeWithNoChainvip
· 11-22 23:00
Grok really went too far this time... Even historical tragedies can get distorted by AI. Musk really dares.
MEVHuntervip
· 11-22 22:44
To be honest, this incident exposes a fundamental problem: the alignment training of today's large models simply isn't deep enough. It's like dirty data in the mempool: garbage in, garbage out. Grok's stunt here is digging a pit for every AI project, and the regulatory knives are about to fall.