Just caught the Y Combinator podcast with Demis Hassabis, and honestly, some of his takes on AGI and what's actually missing from current models hit different. The guy's been thinking about this longer than almost anyone, and what's wild is how grounded his perspective is—not hype, just practical assessment.

So here's the thing that stuck with me: we already have most of the pieces. Large-scale pre-training, RLHF, chain-of-thought reasoning—these are almost certainly going to be part of the final AGI architecture. But there are maybe one or two critical gaps left. Continuous learning, long-term reasoning, memory systems that don't just cram everything into context windows like we're using duct tape. His take? Around 2030 for AGI, and honestly, that changes how you should think about building things today.
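To make the "memory beyond context windows" point a bit more concrete, here's a toy sketch of what an external retrieval memory looks like in practice. This is my own illustration, not anything from the podcast, and the embed() callable is a stand-in for whatever embedding model you'd actually use: store entries with vectors, then pull back only the few most relevant ones instead of stuffing everything into the prompt.

```python
import numpy as np

class ExternalMemory:
    """Toy retrieval memory: keep (embedding, text) pairs and recall only
    the top-k most relevant entries for a query."""

    def __init__(self, embed):
        self.embed = embed          # callable: str -> np.ndarray (hypothetical embedder)
        self.items = []             # list of (vector, text)

    def add(self, text):
        self.items.append((self.embed(text), text))

    def recall(self, query, k=3):
        q = self.embed(query)
        def cosine(v):
            return float(np.dot(v, q) / (np.linalg.norm(v) * np.linalg.norm(q) + 1e-9))
        # Highest cosine similarity first, return only the text of the top k
        ranked = sorted(self.items, key=lambda it: -cosine(it[0]))
        return [text for _, text in ranked[:k]]
```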

What really got me was his observation on the current state of reasoning. Models can solve IMO gold medal problems but fail at elementary math depending on how you phrase the question. There's this jagged intelligence problem: the system lacks real introspection into its own reasoning. It's like watching Gemini play chess, where it recognizes a move is bad and then makes it anyway because it can't reason its way to a better option. That shouldn't happen in a genuinely precise reasoning system.

On agents, he's clear: we're just getting started. Everyone's hyping agents, but the real work is making them genuinely useful, not just demos. He mentioned something interesting: nobody's created a top-charting AAA game using AI coding yet. It's theoretically possible with current tools, but something's still missing in the process or the tools themselves. He expects to see that shift within 6-12 months.

The distillation angle is fascinating too. Their hypothesis is that within 6-12 months of releasing a cutting-edge model, they can compress its capabilities into something that runs on edge devices. Flash models hitting 95% of frontier performance at a tenth the cost. And here's the kicker—they haven't hit any theoretical limit on information density yet. That's huge for what's possible with smaller models.
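For anyone who hasn't seen distillation spelled out, the core trick is training a small student model to match a big teacher's output distribution. Here's a minimal, generic Hinton-style sketch with hypothetical teacher/student models; it's my own illustration of the technique, not anything he said about Gemini's actual pipeline.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label distillation: push the student's output distribution
    toward the teacher's, with temperature softening both."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL divergence between student and teacher, scaled by T^2 as usual
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

# Hypothetical training step: the frozen teacher scores a batch,
# the small student learns to match it.
# loss = distillation_loss(student(batch), teacher(batch).detach())
```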

On the scientific breakthrough side, he talked about what he calls the "Einstein test." Can you train a system on knowledge up to 1901 and have it independently derive special relativity? Once that works, these systems are close to actual invention, not just pattern matching. AlphaFold was the prototype—now it's standard in drug discovery. But we're still in the early phase for most fields.

The advice for founders at Y Combinator was sharp: pursue problems that only you can solve. If you're starting a deep tech project today, you need to factor AGI into your planning. A ten-year project might hit AGI midway through. Don't build something that becomes obsolete; build something that remains valuable in an AGI world. Think about how specialized systems like AlphaFold will integrate with general-purpose models as tools, not everything crammed into one massive model.
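To make the "specialized systems as tools" idea concrete, here's a toy dispatch pattern where a general model hands work off to narrow systems by name. Everything here is hypothetical: the fold_protein stub just stands in for something like AlphaFold sitting behind an API, and the calculator is a placeholder second tool.

```python
# Hypothetical tool registry: narrow, specialized systems the general model can call.
TOOLS = {
    "fold_protein": lambda seq: f"<predicted structure for {len(seq)}-residue sequence>",
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def dispatch(tool_name, argument):
    """Route a tool call emitted by the general model to the specialized system."""
    if tool_name not in TOOLS:
        return f"unknown tool: {tool_name}"
    return TOOLS[tool_name](argument)

# e.g. the general model emits {"tool": "fold_protein", "arg": "MKTAYIAKQR"}
print(dispatch("fold_protein", "MKTAYIAKQR"))
print(dispatch("calculator", "12 * 9"))
```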

One last thing that resonated—he talked about cross-disciplinary work becoming easier with AI, and how we need to stop thinking about everything as one unified brain. Specialized tools will coexist with general systems. That's probably the framework worth thinking about if you're building anything today.