OpenAI Co-founder Karpathy Interview: LLM is a New Type of Computer, Everything Must Be "Rewritten"


Original video title: Andrej Karpathy: From Vibe Coding to Agentic Engineering

Original video source: Sequoia Capital
Original compilation: Bao Yilong, Wall Street Insights

OpenAI co-founder Andrej Karpathy said in the latest interview that large language models are being used as a “new kind of computer,” fundamentally reshaping computing architectures across the board.

On April 29, Andrej Karpathy, the AI leader who led the development of Tesla Autopilot and was a founding member of OpenAI, gave an in-depth breakdown at Sequoia Capital's AI Ascent event of the current leap in AI agent capability and its far-reaching impact on the software and hardware ecosystem.

Karpathy said that starting last December he began to feel that agent-centric workflows had become genuinely usable, and that this shift marks the substantive arrival of the Software 3.0 era.

He said: Last year, many people’s impression of AI was still stuck on ChatGPT, but you have to reassess it—especially starting from December. Things have undergone a fundamental change.

He also proposed a new term, “agentic engineering,” to distinguish it from the “vibe coding” he coined last year: the former means holding the quality bar of professional software development while dramatically accelerating it.

He was blunt that a large amount of existing code and applications “should not exist” under the new paradigm, and that the recruitment processes, development tools, and infrastructure at most organizations are still designed for humans—not for agents.

The dawn of Software 3.0: power handover in underlying computing architecture

The tech industry is standing at a crossroads where quantitative change is turning into qualitative change.

Last December was a critical turning point. Karpathy said that working with the latest AI models gave him a profound shock:

The code blocks the system generates are getting close to perfect. I can hardly remember the last time I changed one. I just trust the system more and more… (Because of this) I have never felt as behind as a programmer as I do now.

This kind of impact is a complete overthrow of the computing paradigm. In Karpathy’s view, the market is currently underestimating the depth of this change.

He pointed out that we are saying goodbye to “Software 1.0 (writing code)” and “Software 2.0 (organizing datasets to train neural networks),” and we are officially entering the “Software 3.0” era.

In this new era, large language models themselves are a “new kind of computer.”

He said: Programming now means writing prompts, and the contents of the context window are the levers you use to steer the large language model, which acts as an interpreter and performs computation over the space of digital information.
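To make the three paradigms concrete, here is a minimal sketch of one task solved three ways. It is my illustration, not code from the interview; the `llm` client, its `complete` method, and the helper names are hypothetical.

```python
# Hypothetical sketch contrasting the three software paradigms on one task:
# deciding whether a review is positive.

# Software 1.0: the programmer writes the rule explicitly in code.
def is_positive_v1(text: str) -> bool:
    positive_words = {"great", "love", "excellent"}
    return any(word in text.lower() for word in positive_words)

# Software 2.0: the "program" is a set of weights learned from a labeled dataset
# (pseudocode only; `train_classifier` stands in for any supervised training loop).
# classifier = train_classifier(labeled_examples)
# is_positive_v2 = lambda text: classifier.predict(text) == "positive"

# Software 3.0: the program is a prompt; the LLM is the interpreter, and the
# context window is what you load into this new kind of computer.
def is_positive_v3(text: str, llm) -> bool:
    prompt = (
        "Answer with exactly 'yes' or 'no'.\n"
        f"Is the sentiment of this review positive?\n\nReview: {text}"
    )
    return llm.complete(prompt).strip().lower() == "yes"
```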

What attracts even more attention from the market is his bold prediction about how the underlying hardware architecture will evolve in the future.

At present, neural networks still run in a virtualized form on existing computers, but he believes this host-client relationship will reverse in the future: you can imagine neural networks becoming the main process, while the CPU becomes a kind of co-processor. Neural networks will shoulder the overwhelming majority of the heavy lifting.

This means that “intelligent compute power,” which dominates capital expenditures across the whole market, will further solidify its strategic core position in the future.

Next-generation infrastructure: rebuilding an “agent-native” ecosystem

When execution and coding are taken over by machines, where will human core value and the future form of infrastructure go?

Karpathy said plainly: Everything must be rewritten.

At present, the documentation for various frameworks and libraries across the internet is still “written for humans,” which bothers him immensely.

Karpathy complained: Why is the documentation still telling me how to do things? I don't want to do it myself. Am I just supposed to copy and paste the text to my AI agent?

The biggest market opportunity in the future lies in building “agent-first” infrastructure.

In this world, systems are decomposed into “sensors” that perceive the world and “actuators” that modify it; data structures need to be highly readable by large language models; and machine agents, acting on behalf of individuals and institutions, interact with one another in the cloud.
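As a rough illustration of what such “agent-first” plumbing could look like, the sketch below exposes a system to agents as typed sensors and actuators with a machine-readable manifest instead of prose documentation. The design and every name in it (`Tool`, `AgentSurface`, `get_stock_level`, `create_purchase_order`) are my assumptions, not something Karpathy specified.

```python
# Hypothetical sketch: describe a system to agents as typed "sensors" (read the
# world) and "actuators" (change the world), with a schema an LLM can parse.
from dataclasses import dataclass, field
from typing import Any, Callable
import json

@dataclass
class Tool:
    name: str
    kind: str                 # "sensor" or "actuator"
    description: str          # one terse line, written for a model, not a human
    parameters: dict          # JSON-schema-style parameter spec
    handler: Callable[..., Any]

@dataclass
class AgentSurface:
    tools: list = field(default_factory=list)

    def register(self, tool: Tool) -> None:
        self.tools.append(tool)

    def manifest(self) -> str:
        """Machine-readable catalogue an agent can load straight into its context window."""
        return json.dumps(
            [{"name": t.name, "kind": t.kind, "description": t.description,
              "parameters": t.parameters} for t in self.tools],
            indent=2,
        )

# Example registration: a sensor that reads inventory and an actuator that reorders.
surface = AgentSurface()
surface.register(Tool(
    name="get_stock_level", kind="sensor",
    description="Return the current stock count for a SKU.",
    parameters={"sku": {"type": "string"}},
    handler=lambda sku: 42,   # placeholder implementation
))
surface.register(Tool(
    name="create_purchase_order", kind="actuator",
    description="Order more units of a SKU.",
    parameters={"sku": {"type": "string"}, "quantity": {"type": "integer"}},
    handler=lambda sku, quantity: {"status": "ordered", "sku": sku, "qty": quantity},
))
print(surface.manifest())
```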

In such a highly automated future, the scarce human capabilities will once again be aesthetics, judgment, and deep business understanding.

Karpathy cited a line he keeps savoring as a summary: You can outsource your thinking, but you can’t outsource your understanding.

Agentic engineering: a productivity explosion far beyond “10x engineers”

In the dimension of productivity improvement that the market cares about most, Karpathy distinguishes two core concepts: “vibe coding” and “agentic engineering.”

He pointed out that “vibe coding” raises the minimum standard of software development across the entire team, while “agentic engineering” is designed to maintain the maximum quality ceiling of professional software.

“Agentic engineering” is not just about speeding things up. It requires developers to coordinate those AI agents that are “somewhat prone to errors, stochastic, but extremely powerful,” moving at full speed without sacrificing quality.
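One way to read “moving at full speed without sacrificing quality” is a loop that never keeps an agent's output until an independent check passes. The sketch below is my illustration, not a workflow Karpathy described: `agent.propose_patch`, `patch.apply`/`patch.revert` are hypothetical APIs, and pytest is assumed as the test runner.

```python
# Hypothetical sketch: hold the quality ceiling by gating every agent-produced
# patch behind a deterministic verifier (here, a test suite run with pytest).
import subprocess

def run_tests() -> bool:
    """Deterministic quality gate: True only if the whole test suite passes."""
    result = subprocess.run(["pytest", "-q"], capture_output=True)
    return result.returncode == 0

def agentic_change(task: str, agent, max_attempts: int = 3) -> bool:
    """Ask an error-prone, stochastic agent for a patch; keep it only if it verifies."""
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        patch = agent.propose_patch(task, feedback=feedback)  # hypothetical agent API
        patch.apply()                                         # e.g. write files into the repo
        if run_tests():
            return True                                       # gate passed: the patch stays
        patch.revert()                                        # undo and retry with feedback
        feedback = f"attempt {attempt} failed the test suite"
    return False                                              # give up and escalate to a human
```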

This will also greatly raise the ceiling of what an enterprise can produce.

Karpathy pointed out: “People used to talk about 10x engineers, but 10x is no longer enough to describe the speedup you get. In my view, the peak output of people who perform well in this field is far beyond 10x.”

With this kind of productivity explosion, an enterprise’s organizational structure and talent screening logic must be rebuilt.

He recommended that companies abandon traditional algorithm-problem interview formats and instead assess how candidates use multiple AI agents to collaboratively build large projects, and whether they can withstand attacks from other AI agents.

The key focus for AI business deployment

For entrepreneurs and investors who are urgently looking for AI application scenarios to deploy, Karpathy provides a highly practical evaluation framework: verifiability.

Right now, AI capabilities exhibit a very peculiar “sawtooth” pattern.

He gave an example: today's most advanced models can restructure a 100,000-line codebase and find zero-day vulnerabilities, yet the same model will tell me to walk 50 meters to a car wash to get my car washed. That's insane.

The reason for this disconnect is that frontier labs (such as OpenAI) have poured massive reinforcement learning resources into areas where results are easy to verify, such as “mathematics” and “code.”

Therefore, as long as you are operating in business scenarios where outcomes are verifiable, AI can unleash enormous power.
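The verifiability criterion can be made concrete: a task is a good fit when its outcome can be scored automatically, which is exactly the signal a reinforcement-learning loop needs. Below is a minimal sketch under that reading; the `reward` function and the hidden-test setup are my illustration, not anything from the interview.

```python
# Hypothetical sketch: a verifiable reward for a code-generation task.
# The candidate answer is scored by running hidden unit tests, so the signal
# is cheap, automatic, and unambiguous -- the property frontier labs exploit
# for math and code, and that other verifiable domains can replicate.
import subprocess
import sys
import tempfile

def reward(candidate_solution: str, test_code: str) -> float:
    """Return 1.0 if the generated code passes the hidden tests, else 0.0."""
    program = candidate_solution + "\n\n" + test_code
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, timeout=10)
    return 1.0 if result.returncode == 0 else 0.0

# Example: scoring a tiny generated function against two assertions.
candidate = "def add(a, b):\n    return a + b"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"
print(reward(candidate, tests))  # prints 1.0 if the candidate verifies, else 0.0
```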

Karpathy hinted that the market still contains many high-value verifiable reinforcement-learning environments that top labs have not yet prioritized; this is precisely the blue ocean where startups can fine-tune models and monetize.

Original video link

