Just finished watching DeepMind founder Demis Hassabis's latest talk at Y Combinator, and some of the ideas are well worth discussing. He said plainly that we're only a few key pieces away from true AGI: continuous learning, long-horizon reasoning, and memory systems. By his estimate, these problems should be solved around 2030.



The most interesting part is his critique of current large models. He says these systems exhibit a "patchy intelligence": able to solve International Mathematical Olympiad gold-medal problems, yet stumbling on elementary school math. This isn't a capability issue; rather, the reasoning pathways are still too crude, lacking reflection on their own thought processes. He even used chess as an example: sometimes the model realizes a move is bad but can't find a better alternative, and ends up repeating the same mistake. That suggests there is still plenty of room for innovation in reasoning systems.

On agents, his take is particularly intriguing. He believes agents are the true path to AGI, but we're still in the early stages. One sobering detail: no one has yet used AI programming tools to build a AAA-quality game that tops the app charts. In theory, current tools should make it possible, but no one has done it, which suggests the toolchain or the process itself is still missing something. He predicts this breakthrough will happen within 6 to 12 months.

Progress on model distillation is also impressive. Their Flash models achieve about 95% of the flagship model's performance at one-tenth the cost, and the compression cycle keeps getting faster: within 6 to 12 months of a new model's release, its capabilities can be compressed into small models that run on edge devices. He admits they haven't yet found a theoretical limit on information density, so there's still plenty of headroom.
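For readers unfamiliar with what "distillation" means mechanically, here is a minimal sketch of the classic idea: train a small student to match a large teacher's temperature-softened output distribution. This is a generic illustration of the technique, not DeepMind's actual training setup; all numbers and function names are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature; a higher temperature softens the
    # distribution, exposing the teacher's "dark knowledge" about
    # relative likelihoods of wrong answers.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions:
    # the standard knowledge-distillation objective the student minimizes.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, [2.0, 1.0, 0.1]))  # matching student: ~0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # mismatched student: > 0
```

The point of the temperature parameter is that a teacher's near-misses carry signal; matching the full softened distribution transfers far more information per example than matching hard labels alone.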

On scientific discovery, he proposed an interesting concept, the "Einstein Test": train a system only on knowledge available before 1901 and see whether it can independently derive Einstein's 1905 theory of relativity. If AI can do that, it's truly approaching autonomous innovation. AlphaFold has already demonstrated AI's potential in protein folding and is used by some 3 million researchers worldwide. But he believes that's just the beginning: materials science, drug discovery, and climate modeling are all at an "AlphaFold 1 moment", promising but not yet a true breakthrough.

The most practical advice for entrepreneurs: if you're starting a deep-tech project today with a ten-year horizon, you must factor the arrival of AGI into your planning. This isn't alarmism but a real question of whether your product will still be useful in the AGI era. His view is that general systems (like Gemini) will use specialized systems (like AlphaFold) as tools, rather than stuffing everything into one big model. That has real implications for how you architect today.
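The "generalist orchestrating specialists" architecture he describes can be sketched as a simple router: a general model handles most queries but delegates to a specialized system when one applies. This is a purely illustrative pattern; all function names and the keyword-based routing are invented stand-ins, not any actual Gemini or AlphaFold API.

```python
# Hypothetical sketch of tool delegation. Names are invented.

def fold_protein(query):
    # Stand-in for a specialized system (e.g. an AlphaFold-like tool).
    return f"specialist: structure prediction for '{query}'"

def general_answer(query):
    # Stand-in for the general model answering directly.
    return f"generalist: response to '{query}'"

# Registry mapping a trigger to a specialist tool.
SPECIALIST_TOOLS = {
    "protein": fold_protein,
}

def route(query):
    # The general system decides whether to delegate to a specialist
    # tool or answer the query itself.
    for keyword, tool in SPECIALIST_TOOLS.items():
        if keyword in query.lower():
            return tool(query)
    return general_answer(query)

print(route("Fold this protein: MKTAYIAK"))
print(route("Summarize the talk"))
```

The design point: specialists stay independently improvable and verifiable, and the general model only needs to learn when to call them, which is why this composition tends to beat folding every capability into one monolithic model.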

The core logic of his entire talk: pursuing hard problems and easy problems takes a similar amount of effort, just in different areas. Since life is finite, why not focus your energy on the things that only you can do and others can't? It sounds simple, but actually doing it takes strong willpower.