Superintelligent AI could become a disaster — Vitalik Buterin

Humanity is not yet good enough at aligning on shared values to guarantee that a superintelligent artificial intelligence would work for the benefit of all people, Ethereum co-founder Vitalik Buterin has said.

He was commenting on a post by a 3D artist who goes by the handle Utah teapot.

How is AI alignment even a thing? Everyone is always asking "aligned to what" and no one ever has an answer other than "human values". Coherent, self-consistent human values are not a thing. I still don't get it. What are we researching? It's ridiculous that this is a growing…

— Utah teapot 🫖 (@SkyeSharkie) August 24, 2025

"How can the concept of AI alignment even exist? I always ask, 'alignment with what?'. No one ever answers anything other than 'with human values'. But there are no coherent, non-contradictory human values. I still don't understand. What are we even researching? It's absurd that this is turning into a growing industry. Make-work," he wrote.

Buterin countered that there are many things in the world that clearly violate ethical principles, citing murder and the imprisonment of innocent people as examples.

"[…] We are not yet good enough at alignment to guarantee that a single superintelligent AI will avoid even this," the developer added.

Utah teapot noted that work on aligned artificial intelligence is trending toward restricting public access to the technology, while large companies sell versions of their AI systems that fuel harmful practices and serve as instruments of war.

"I am deeply concerned that the centralization of access to AI technologies allows for the imposition of what is not actually security issues — for example, discussions around 'psychosis' from LLM. This poses a risk of harming unique or marginalized cultures and their values," noted the user.

Buterin shares many of these concerns.

"I think the greatest risks will come from military and other organizations that have significant power, allowing them to exempt themselves from the security rules that apply by default to everyone else," added Ethereum co-founder.

As an example, Utah teapot cited the AI startup Anthropic, which develops alternative versions of its "civilian" models and provides them to governments for use in military or intelligence operations.

Buterin emphasized that the likelihood of a catastrophic scenario for humanity increases if there is a single superintelligent, agentic AI with its own will and the capacity to act as an independent actor.

In a pluralistic environment, no single system can fully control the situation. Market mechanisms alone, however, are not enough to create such an environment: it takes deliberate effort, including changes to laws and incentives that large corporations will not like.

As a reminder, in 2022 India's Minister of Information Technology, Rajeev Chandrasekhar, called for the development of global standards to ensure that AI is safe for humans.
