Karpathy's 4/30 talk at Sequoia Ascent compresses this year's most useful AI explanations into three key points. After reading, your perspective on AI will change.
1. AI is not just "faster," it’s a new paradigm
In the past two years, everyone has been talking about AI making things faster.
Karpathy says this is a misinterpretation.
Here are three examples of AI redefining tasks:
- menugen: image in, image out, no traditional code; the entire app is absorbed into the LLM
- .md skills: install software without writing .sh scripts; just write a plain Chinese or English explanation and let the LLM read your environment and perform the install
- LLM knowledge base: a task traditional code cannot do at all, namely turning unstructured text in any format into computable knowledge
The first category is "reducing code," the second is "using English as code,"
the third is "tasks that traditional code simply cannot do."
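The third category can be sketched as a tiny pipeline. This is a minimal illustration, not anything from the talk: `ask_llm` is a hypothetical stand-in for a real LLM API call, hard-coded here so the sketch runs offline.

```python
import json

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call.
    A real implementation would send `prompt` to a model and return
    its reply; here we hard-code one plausible answer."""
    return json.dumps({"vendor": "Acme Corp", "total": 1280.5, "currency": "USD"})

def extract_invoice(raw_text: str) -> dict:
    """Turn unstructured text into computable knowledge:
    ask the model for a JSON record, then parse it."""
    prompt = (
        "Extract vendor, total, and currency from this invoice "
        "as a JSON object:\n" + raw_text
    )
    return json.loads(ask_llm(prompt))

record = extract_invoice("Invoice from Acme Corp ... amount due: $1,280.50")
print(record["vendor"], record["total"])
```

The point is the shape of the task: there is no parser to write, because "understand this text" is the whole program.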
2. Jagged Edge — Why AI is both versatile and stupid at the same time
The core argument.
Why can the same AI refactor 100k lines of code,
yet suggest you go wash your car? It’s not a model glitch.
Karpathy’s exact words:
"You're either on the rails of the RL circuits and flying,
or off-roading in the jungle with a machete."
Two factors determine which tasks are within the training distribution:
verifiability (the result can be checked) + economics (the task is worth the frontier labs' RL investment)
Math competitions / programming / theorem proving:
High verifiability + high TAM → within the circle → when you use it, you’re flying
Everyday advice / obscure languages and literature / long-tail tasks:
Low TAM → not in RL → you’re hacking through the jungle with a machete
It’s not a linear story of "AI is getting stronger."
It’s about uneven boundaries—you must know which side you’re on.
3. Agent-native economy
The final point: future software will decompose into
sensor (input) + actuator (execution) + logic (reasoning)
The logic layer runs entirely on LLM,
while sensor / actuator use traditional code as co-processors.
Implication: making information as readable as possible for the LLM
is the core constraint of upcoming software design.
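That decomposition can be made concrete with a one-turn loop. Every name here is illustrative, not from the talk, and the `logic` stub uses a trivial rule where a real system would call an LLM:

```python
def sensor() -> str:
    """Traditional code: gather input and make it readable for the model."""
    return "disk usage at 91% on /var"

def logic(observation: str) -> str:
    """The reasoning layer. A real system would call an LLM here;
    this stub picks an action with a trivial rule so the sketch runs."""
    return "clean_logs" if "disk" in observation else "noop"

def actuator(action: str) -> str:
    """Traditional code: execute the model's decision (stubbed effects)."""
    effects = {"clean_logs": "rotated logs under /var", "noop": "no action taken"}
    return effects[action]

# One turn of the agent loop: sense -> reason -> act.
print(actuator(logic(sensor())))
```

Only the middle function changes when the model improves; the sensor and actuator stay ordinary code serving as co-processors.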
---
These three points form a coherent framework:
The new paradigm shows you what AI can do that was impossible before,
the jagged edge helps you identify where AI still falls short,
and agent-native reveals how to wrap the remaining tasks for AI.
It’s not about "AI getting stronger."
It’s about "which tasks are in the circle, which are in the jungle."