Karpathy's April 30 talk at Sequoia Ascent compresses this year's most useful AI explanations into three key points. After reading them, your perspective on AI will change.


1. AI is not just "faster," it’s a new paradigm
In the past two years, everyone has been talking about AI making things faster.
Karpathy says this is a misinterpretation.
Here are three examples of AI redefining tasks:
- menugen: image input and output, no traditional code; the entire app is swallowed by the LLM
- .md skills: installing software without writing .sh scripts; just write a plain Chinese or English explanation and let the LLM understand your environment and perform the install
- LLM knowledge base: tasks that traditional code cannot do at all, such as turning unstructured text in any format into computable knowledge
The first category is "reducing code," the second is "using English as code,"
and the third is "tasks that traditional code simply cannot do."
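The third category can be made concrete with a minimal sketch. Everything here is illustrative: `call_llm` is a hypothetical stand-in for any model API (not a real library call), and the prompt and JSON schema are assumptions, not details from the talk.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call.
    In a real system this would hit a model endpoint;
    it is stubbed here so the sketch is self-contained."""
    return '{"person": "Ada Lovelace", "year": 1843, "field": "computing"}'

def extract_facts(unstructured_text: str) -> dict:
    """Ask the LLM to convert free-form text into a JSON record.
    No traditional parser handles arbitrary formats;
    the LLM plays the role of the parser."""
    prompt = "Extract person, year, and field as JSON from:\n" + unstructured_text
    return json.loads(call_llm(prompt))

record = extract_facts(
    "In 1843 Ada Lovelace published notes that founded computing."
)
# The record is now computable: you can filter, sort, and join it
# like any other structured data.
print(record["person"])
```

The point is the shape of the pipeline, not the stub: free text goes in, a structured record comes out, and everything downstream is ordinary code.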
2. Jagged Edge — Why AI is both versatile and stupid at the same time
The core argument.
Why can the same AI refactor 100k lines of code,
yet suggest you go wash your car? It’s not a model glitch.
Karpathy’s exact words:
"You're either on the rails of the RL circuits and flying,
or off-roading in the jungle with a machete."
Two factors determine whether a task falls inside the training distribution:
verifiability (can the result be checked?) + economics (is the market large enough for frontier labs to invest RL effort in it?)
Math competitions / programming / theorem proving:
High verifiability + high TAM → within the circle → when you use it, you’re flying
Everyday advice / obscure languages and literature / long-tail tasks:
Low TAM → not in RL → you’re hacking through the jungle with a machete
It’s not a linear story of "AI is getting stronger."
It’s about uneven boundaries—you must know which side you’re on.
3. Agent-native economy
The final point: future software will decompose into
sensor (input) + actuator (execution) + logic (reasoning)
The logic layer runs entirely on the LLM,
while the sensor and actuator layers use traditional code as co-processors.
Implication: making information as readable as possible for the LLM
is the core design constraint of upcoming software.
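That decomposition can be sketched in a few lines. This is a toy under stated assumptions: the `logic` layer would be an LLM call in a real agent but is stubbed with a trivial rule here, and all function and action names are invented for illustration.

```python
def sensor(raw_event: str) -> str:
    """Traditional code: normalize raw input into LLM-readable text."""
    return f"EVENT: {raw_event.strip().lower()}"

def logic(observation: str) -> str:
    """Logic layer: in a real agent this would be an LLM call.
    Stubbed with a trivial rule so the sketch runs."""
    if "error" in observation:
        return "restart_service"
    return "noop"

# Traditional code: a fixed, deterministic set of actions.
ACTIONS = {
    "restart_service": lambda: "service restarted",
    "noop": lambda: "nothing to do",
}

def actuator(action: str) -> str:
    """Traditional code: execute the chosen action deterministically."""
    return ACTIONS[action]()

# The agent loop: sensor -> LLM logic -> actuator.
result = actuator(logic(sensor("  Disk ERROR on node 3 ")))
print(result)  # -> "service restarted"
```

Note how the design constraint shows up in `sensor`: its whole job is to render the world as text the LLM can read, while `actuator` keeps execution safely in deterministic code.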
---
These three points form a coherent framework:
The new paradigm shows you what AI can do that was impossible before,
the jagged edge helps you identify where AI still falls short,
and agent-native reveals how to wrap the remaining tasks for AI.
It’s not about "AI getting stronger."
It’s about "which tasks are in the circle, which are in the jungle."