When people discuss AGI today, the first question they usually ask is: how far are we from AGI?


But I increasingly feel that a more important question is whether humans still have the ability to decide if AGI will arrive at all.
My answer is somewhat pessimistic: every technology that has truly changed the structure of civilization has, once it became feasible, eventually been pushed forward.
Nuclear energy was like this; so were the internet and the mobile internet; and AI is very likely the same.
So I don't fully subscribe to traditional technological optimism; I am more of a technological irreversibilist.
In other words, I believe AGI may not bring utopia, but it will almost certainly arrive, and probably faster than most people imagine.
Many people still picture AGI as a chatbot smarter than humans, but by 2026, I think continuing to debate the definition of AGI itself will be less and less meaningful.
What truly matters is not whether it reaches philosophical-level general intelligence, but whether it begins to impact the real economic structure.
In fact, this is already happening: AI has started to replace some basic mental labor, reshape search, influence software development, and change how we teach and learn.
I think many people underestimate both the positive impact of AGI and its risks.
The most underestimated positive aspect is the democratization of cognitive resources: throughout history, top-tier knowledge, analytical ability, and research capacity have been the privilege of a few.
AI is gradually democratizing these things, giving ordinary people their first chance to access near-professional-level cognitive assistance.
The risks, however, are equally large, because AGI is likely inherently inclined toward the concentration of power.
Training the most advanced models requires massive capital, computing power, and data, which means the most powerful AI systems may be held by very few organizations for a long time.
This is also why I believe crypto and decentralization will become increasingly important in the AGI era.
If the most powerful cognitive systems of the future are controlled by a few companies, then open finance, open identity, and open computing become important counterweights.
Stablecoins, on-chain identities, and decentralized computing networks in particular will, I think, gradually become the open layer of the AI world.
As for how individuals should prepare, my answer is simple: don't just learn tools, learn judgment.
In the future, the scarcest resource may no longer be knowledge itself, but the ability to ask the right questions, filter information, and build independent cognition.
My own framework has always been cautious optimism: I don't think AGI will destroy the world, but I also don't believe it will automatically create a fair one, because technology only amplifies existing human structures.
A question I keep pondering: if everyone one day has access to nearly unlimited intelligent assistance, what will truly remain as the difference between people?