Just finished watching DeepMind founder Demis Hassabis's latest talk at Y Combinator, and some of the ideas are worth discussing. He said bluntly that we are only a few key pieces away from true AGI: continuous learning, long-term reasoning, and memory systems. By his estimate, these problems should be solved around 2030.
The most interesting part is his critique of current large models. He says these systems exhibit a "patchy intelligence": they can solve International Mathematical Olympiad gold-medal problems yet stumble on elementary-school arithmetic. This isn't a raw capability issue; the reasoning pathways are still too crude, and the models lack reflection on their own thought processes. He used chess as an example: sometimes a model realizes a move is bad but can't find a better alternative, and ends up repeating the same mistake. That pattern suggests there is still a lot of room for innovation in reasoning systems.
His take on agents I find particularly intriguing. He believes agents are the true path to AGI, but we're still in the early stages. One sobering detail: no one has used AI programming tools to ship an AAA game that tops the app stores. Theoretically, current tools should make it possible, yet nobody has done it, which suggests something is still missing in the toolchain or the process. He predicts this breakthrough will happen within 6 to 12 months.
Progress in model distillation is also impressive. Their Flash models reach about 95% of the flagship model's performance at one-tenth of the cost, and the compression cycle keeps getting shorter: within 6 to 12 months of a new model's release, its capabilities can be distilled into small models that run on edge devices. He admits that no theoretical limit to this information density has been found yet, so there is still plenty of headroom.
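To make the distillation idea concrete, here is a minimal sketch of the standard knowledge-distillation objective (this is the textbook technique, not DeepMind's actual training recipe): the small "student" model is trained to match the large "teacher" model's temperature-softened output distribution, not just its top label. All names and numbers below are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    Minimizing this pushes the student toward the teacher's full
    output distribution, transferring "dark knowledge" about how
    the teacher ranks the wrong answers, too.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that reproduces the teacher's logits has zero loss;
# a student with a different ranking pays a positive penalty.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, [2.0, 1.0, 0.1]))  # 0.0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # positive
```

In a real training loop this KL term is combined with the ordinary cross-entropy on ground-truth labels; the sketch only shows the matching signal that makes the "95% of the flagship at a tenth of the cost" compression possible in principle.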
On scientific discovery, he proposed an interesting thought experiment, the "Einstein Test": train a system only on knowledge available before 1901 and see whether it can independently derive Einstein's 1905 theory of relativity. If an AI can do that, it is genuinely approaching autonomous innovation. AlphaFold has already demonstrated AI's potential in protein folding and is used by 3 million researchers worldwide, but he believes that's just the beginning: materials science, drug discovery, and climate modeling are all at their "AlphaFold 1 moment", promising but not yet a true breakthrough.
His most practical advice for entrepreneurs: if you're starting a deep-tech project today with a ten-year horizon, you must factor the arrival of AGI into your planning. That isn't alarmism; it's asking whether your product will still matter in the AGI era. His view is that general systems (like Gemini) will use specialized systems (like AlphaFold) as tools, rather than stuffing everything into one big model, and that should shape how you architect systems today.
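The "general system calls specialized systems as tools" pattern can be sketched in a few lines. Everything here is hypothetical (the tool names, the keyword-based routing); a real general model would decide for itself when to invoke a specialist, but the shape of the architecture is the same: a registry of narrow experts behind a single general entry point.

```python
from typing import Callable, Dict

# Registry mapping tool names to specialized systems. In the talk's
# framing, an entry might wrap something like AlphaFold; here it is
# a stand-in function for illustration only.
TOOLS: Dict[str, Callable[[str], str]] = {}

def register_tool(name: str):
    """Decorator that adds a specialized system to the registry."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return wrap

@register_tool("fold_protein")
def fold_protein(sequence: str) -> str:
    # Placeholder for a narrow expert model (e.g. protein folding).
    return f"structure({sequence})"

def general_agent(query: str) -> str:
    """A 'general' front end that delegates to specialists.

    A real general model would choose the tool itself; we route on
    a simple prefix to keep the sketch self-contained.
    """
    if query.startswith("fold:"):
        return TOOLS["fold_protein"](query.removeprefix("fold:"))
    return f"answer({query})"

print(general_agent("fold:MKV"))   # delegated to the specialist
print(general_agent("hello"))      # handled by the general system
```

The design point is that the general system stays small and stable while specialists can be added, upgraded, or swapped independently, which is the opposite of folding every capability into one monolithic model.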
The core logic of his whole talk: working on hard problems is about as difficult as working on easy ones, just in different places. Since life is finite, why not spend your energy on the hard things that only you can do and others won't attempt? It sounds simple, but actually doing it takes real willpower.