a16z Founder: In the Agent era, what truly matters has changed

Original video title: Marc Andreessen Reflects on the Death of the Browser, Pi + OpenClaw, and Why “This Time Is Different”

Original video source: a16z, Latent Space

Original transcription: FuturePulse

This is the latest interview with a16z founder Marc Andreessen on the Latent Space podcast.

He is a well-known American internet entrepreneur—one of the key figures in the early development of the internet; and after founding a16z, he also became a representative figure among Silicon Valley’s top investors.

The entire conversation focuses on the history of AI development and the latest trends, and it’s well worth reading.

I. This round of AI isn’t something that suddenly appeared out of nowhere—it’s the first time it has truly “started working” after 80 years of technological marathon

· This round of AI didn’t appear out of nowhere; it arrives after an 80-year technological long-distance run.

· Marc Andreessen directly refers to the present as an “80-year overnight success.” What he means is that the sudden explosion people see with their own eyes is actually the concentrated release of decades of technological reserves.

· He traces this technical thread back to early neural network research, and emphasizes that today the industry has effectively already accepted the judgment that “neural networks are the correct architecture.”

· In his telling, the key turning points aren’t a single moment, but a stack of successive milestones: AlexNet, Transformer, ChatGPT, reasoning models, and then agents and self-improvement.

· He particularly emphasizes that this time it isn’t just that text generation has gotten stronger. Four capabilities are emerging at the same time: LLMs, reasoning, coding, and agents / recursive self-improvement.

· The reason he believes “this time is different” isn’t that the narrative is more appealing—it’s that these capabilities have begun working on real-world tasks.

II. The agent architecture represented by Pi and OpenClaw marks a deeper software architecture shift than chatbots did

· He describes agents very concretely: fundamentally, it’s “LLM + shell + file system + markdown + cron/loop.” In this structure, the LLM is the core for reasoning and generation; the shell provides the execution environment; the file system saves state; markdown makes the state readable; and cron/loop provides periodic wake-ups and task progression.

· He believes the importance of this combination is that, besides the model itself being new, the other components are all parts that already matured in the software world—they’re understandable and reusable.

· Since an agent’s state is stored in files, it can migrate across models and across runtimes; the underlying model can be swapped, but the memory and state remain.

· He repeatedly stresses introspection: an agent knows its own files, can read its own state, and can even rewrite its own files and functions, moving toward “extending itself.”

· In his view, the real breakthrough isn’t only that “models can answer,” but that agents can leverage the existing Unix toolchain to bring the potential of the entire computer into play.
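The “LLM + shell + file system + markdown + cron/loop” structure described above can be sketched as a minimal loop. This is a hypothetical illustration, not any real product’s code: `call_llm` is a stub standing in for a model call, and all names are invented.

```python
import subprocess
import time
from pathlib import Path

STATE = Path("agent_state.md")  # markdown keeps the agent's state human-readable

def call_llm(prompt: str) -> str:
    # Stub standing in for the model call. Because all durable state lives in
    # the file, the backing model can be swapped without losing memory.
    return "# Goals\n- demo goal\n\n# Last action\necho agent-tick\n"

def run_shell(command: str) -> str:
    # The shell is the agent's execution environment.
    out = subprocess.run(command, shell=True, capture_output=True, text=True)
    return out.stdout + out.stderr

def tick() -> str:
    # One wake-up: read state, ask the model, persist new state, then act.
    state = STATE.read_text() if STATE.exists() else "# Goals\n(none yet)\n"
    new_state = call_llm(f"Current state:\n{state}\nPropose next step.")
    STATE.write_text(new_state)           # file system saves state
    return run_shell("echo agent-tick")   # act through the shell

if __name__ == "__main__":
    while True:        # cron/loop: periodic wake-ups drive task progression
        print(tick().strip())
        time.sleep(60)
```

Because the state file, not the process, is the source of truth, this structure also illustrates the migration point: the loop can be stopped, the model swapped, and the agent resumed from the same markdown file.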

III. The era of browsers, traditional GUI, and “humans manually clicking software” will gradually be replaced by agent-first interaction

· Marc Andreessen has said explicitly that in the future “you may no longer need a user interface.”

· He goes further: the main users of software in the future may not be humans, but “other bots.”

· That means many interfaces designed today for humans to click, browse, and fill forms will degrade into an execution layer that agents call from behind the scenes.

· In this world, humans are more like people who propose goals: telling the system what they want, and then letting the agent call services, operate software, and complete the workflow.

· He ties this change to a broader future of software: high-quality software will become increasingly “abundant,” no longer a scarce item painstakingly built by only a small number of engineers.

· He also predicts that the importance of programming languages will decline; models will write code across languages, translate between them, and in the future humans may care more about explaining why AI organizes code this way rather than clinging to one particular language.

· He even mentions a more aggressive direction: conceptually, AI may not only output source code; it may directly output lower-level binaries or model weights.

IV. This AI investment cycle is similar to the 2000 internet bubble, but the underlying supply-demand structure is different

· Looking back at 2000, he emphasized that the crash was largely not because “the internet wasn’t working,” but because telecom and bandwidth infrastructure was overbuilt: fiber and data centers were built ahead of demand, followed by a long period of digestion.

· He believes that today there are indeed concerns about “overbuilding” as well, but the current investment players are mainly cash-rich big companies like Microsoft, Amazon, and Google—not highly leveraged fragile players.

· He specifically points out that once investment takes the form of GPUs that can actually run workloads, it usually translates into revenue very quickly, unlike the large amounts of capacity that sat idle in 2000.

· He also emphasizes that what we’re using now is actually a “sandbagged” version of the technology: because of insufficient supply of GPUs, memory, data centers, and so on, the full potential of the models hasn’t been fully unleashed.

· In his judgment, in the coming years the real constraints won’t be only GPUs, but also the bottlenecks created by the interaction among CPU, memory, network, and the entire chip ecosystem.

· He puts AI scaling laws side by side with Moore’s Law, believing they don’t just describe patterns—they also continuously spur the coordination and forward progress of capital, engineering, and the industry.

· He mentions a counterintuitive but important phenomenon: as software optimization accelerates, some previous-generation chips may even become more economically valuable than they were when first purchased.

V. Open source, edge inference, and local running are not side issues—they’re part of the AI competitive landscape

· Marc Andreessen clearly believes open source is very important—not just because it’s free, but because it “lets the whole world learn how it gets made.”

· He describes open-source releases like DeepSeek as a “gift to the world,” because code + papers spread knowledge rapidly, raising the baseline across the entire industry.

· In his account, open source isn’t only a technical choice—it can also be a kind of geopolitical and market strategy: different countries and companies may adopt different openness strategies based on their own business constraints and goals for influence.

· At the same time, he emphasizes the importance of edge inference: in the next few years, the cost of centralized inference may not fall low enough, and many consumer-grade applications won’t be able to bear high cloud inference costs over the long term.

· He mentions a repeating pattern: models that seem “impossible to run on a PC” today often end up actually running on local machines a few months later.

· Besides cost, the drivers for local running also include trust, privacy, latency, and use cases: wearables, door locks, and personal devices are all better suited for low-latency, on-device inference.

· His assessment is very direct: almost everything that comes with chips in the future may come with an AI model.

VI. AI’s real challenges aren’t only about model capability—they’re about security, identity, money flow, organizational and institutional friction

· On security, his judgment is sharp: AI will make nearly all latent security bugs easier to find, and in the short term there could be a stretch of “computer security disasters on a massive scale.”

· But he also believes that programming agents will scale the ability to patch vulnerabilities; in the future, the way to “protect software” may be to have bots scan it and fix it.

· On identity, he thinks “proof of bot” is not feasible, because bots will keep getting stronger. The truly feasible direction is “proof of human,” combining biometrics, cryptographic verification, and selective disclosure.

· He also brings up a frequently overlooked issue: if agents are truly to handle business in the real world, they will ultimately need money and payment capability, and even some form of bank account, cards, or stablecoin-like infrastructure.

· At the organizational level, he borrows the framework of managerial capitalism and believes AI may reinforce founder-led companies, because bots are especially good at generating reports, coordinating, handling paperwork, and a large amount of “managerial work.”

· But he doesn’t think society will quickly and smoothly accept AI: he gives examples such as professional licensing, unions, dockworkers’ strikes, government agencies, K-12 education, and healthcare—showing that the real world has many institutional “slowdown devices.”

· His conclusion is that both AI utopians and doomsayers tend to overlook one point: once a technology becomes possible, it doesn’t mean all 8 billion people will immediately change along with it.

