In a podcast discussion, MIT researcher Max Tegmark warned that the main risk of advanced artificial intelligence is not geopolitical competition but the loss of human control. He advocates binding safety standards and independent regulatory mechanisms, similar to those used in other high-risk industries, to keep the technology from outpacing our ability to manage it. His organization, the Future of Life Institute, was a founding member of the alliance that issued the "Asilomar AI Principles," which outline a development path for AI that serves humanity.