The act of writing code has basically been solved.
Written by: Boris Cherny
Inside Anthropic, colleagues jokingly call Boris Cherny the "Father of Claude Code." He led the team that built this deeply model-integrated programming assistant, and witnessed firsthand the enormous shift from "automatic code completion" to "agents writing 100% of the code."
In this talk aimed at entrepreneurs and engineers, he systematically recounts the story of Claude Code’s birth, why “coding is basically solved,” and what changes will occur in the software industry and team structures under this premise.
From “Accidental Project” to Phenomenal Product
Boris joined Anthropic at the end of 2024. At that time, the company had an internal incubator-like team called Anthropic Labs. This small team later quickly developed several core products, including Claude Code, MCP, and desktop applications. After completing their mission, they disbanded, but now they have been recalled for a “second round.”
In the context of 2024, the industry's imagination about "AI coding" still revolved mainly around the IDE's type-ahead autocomplete: press Tab and let the model finish a line for you. Boris's intuition was that model capabilities had already far outgrown this form, and that the actual product form was lagging seriously behind. This is what they internally call "product overhang."
Therefore, Claude Code’s initial goal was very aggressive: not just to make a smarter autocomplete, but to let the intelligent agent directly take on the task of “writing all the code,” with humans mainly responsible for review and decision-making.
Of course, reality was not so smooth. During the first six months of developing Claude Code, almost no one truly loved using it. It could write only about 10% of the code, the experience was rough, and inside Anthropic it remained an experimental tool. Only after the release of the Opus 4 model in May 2025 did the usage curve see truly exponential growth. Each subsequent model upgrade (4.5, 4.6, 4.7) brought a clear "significantly better again" inflection point.
Looking back, what’s most special about this product is that it was designed from day one not for “current models,” but for “the next generation model six months from now.” The team knew that there wouldn’t be product-market fit (PMF) for a while, but they persisted in building the “right interaction” first, then waited for the models to catch up.
Why is “coding basically solved”?
On stage, Boris asked the programmers in the audience directly: who still writes 100% of their code by hand? Who uses a Claude Code-style agent to write 100% of it? Most hands landed somewhere in the middle, and he joked, "then it's about 50% solved."
But for himself, the answer is already very extreme: he now generates 100% of his code with Claude Code.
Claude Code's own codebase is written entirely by the model, on a very conventional stack: TypeScript + React, with nothing exotic.
One reason for choosing this stack is that, in the early days when model capabilities were weaker, using “mainstream tech in the model’s training distribution” could significantly improve generation quality.
As models have iterated, they can now pick up new languages and frameworks almost effortlessly, so the choice of tech stack is no longer a bottleneck.
In his personal workflow, Boris completes dozens of PRs daily. Once, just to see how far his throughput could go, he pushed out 150 PRs; behind all of them, the actual coding was done entirely by Claude, while he played the roles of product manager, architect, and reviewer.
Of course, he admits that this “100% solved” situation currently only applies to some scenarios:
Small, clear, mainstream codebases can already be fully handed over to the model.
On very large, complex legacy codebases, niche languages, and highly specialized engineering environments, large models still fall clearly short.
But his judgment is simple: most of these shortcomings are just a matter of “waiting for the next version of the model.”
A phone + thousands of agents: his personal workflow
Boris has shared his development environment on social media. Initially, he didn’t expect it to spark so much discussion, as he saw it as just a “natural evolution” of his work style.
Now, most of his work has moved to his phone: he opens the Claude app, switches to the Code tab, and sees multiple parallel conversations. He usually maintains 5–10 sessions, each with many sub-agents, totaling hundreds; at night, more than a thousand agents can be running longer tasks in the background.
The key concept supporting this system is a seemingly simple command: /loop.
The essence of /loop is to let Claude schedule “automatically repeating tasks in the future” in a cron-like manner: set to run every minute, every 5 minutes, daily, etc.
With this loop, he built a complete “automatic maintenance system”:
A loop dedicated to “monitor PRs”: fixing CI, auto-rebasing, keeping PR list clean.
A loop responsible for “maintaining overall project CI health”: automatically locating and fixing flaky tests and other issues.
A loop that fetches user feedback from Twitter every 30 minutes, automatically clusters and organizes it, forming actionable feedback summaries.
In his description, loops are already like a future-oriented programming primitive: the simplest feasible form, yet very powerful. Coupled with the recently launched routines (long-running workflows that live on the server and keep going even when your computer is off), the model can continuously push projects forward in the background.
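The talk does not show how /loop works internally, but the cron-like behavior it describes can be sketched in a few lines. The following is a hypothetical illustration in TypeScript (the stack the article names), not the real Claude Code implementation; the task names mirror the three loops described above and are purely illustrative.

```typescript
// Hypothetical sketch of a /loop-style scheduler. This is NOT the real
// Claude Code internals, just an illustration of the cron-like idea:
// each loop is a task with an interval, and a scheduler tick runs
// whichever tasks are due.

type LoopTask = {
  name: string;
  intervalMs: number; // e.g. every minute, every 5 minutes, daily
  lastRun: number;    // epoch ms of the previous run (0 = never)
  run: () => Promise<void>;
};

// A task is due when its interval has elapsed since its last run.
function isDue(task: LoopTask, now: number): boolean {
  return now - task.lastRun >= task.intervalMs;
}

// One scheduler tick: run every due task and stamp its lastRun.
async function tick(tasks: LoopTask[], now: number): Promise<string[]> {
  const ran: string[] = [];
  for (const task of tasks) {
    if (isDue(task, now)) {
      await task.run();
      task.lastRun = now;
      ran.push(task.name);
    }
  }
  return ran;
}

// Loops mirroring the ones described in the talk (names are illustrative).
const tasks: LoopTask[] = [
  { name: "monitor-prs", intervalMs: 60_000, lastRun: 0,
    run: async () => { /* fix CI, auto-rebase, tidy the PR list */ } },
  { name: "ci-health", intervalMs: 5 * 60_000, lastRun: 0,
    run: async () => { /* locate and fix flaky tests */ } },
  { name: "twitter-feedback", intervalMs: 30 * 60_000, lastRun: 0,
    run: async () => { /* cluster user feedback into summaries */ } },
];
```

In a real system the "tick" would be driven by a server-side clock (which is also what makes routines survive the laptop being closed); the point of the sketch is only that a loop is data, not code, so the model can create and schedule new ones on the fly.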
Team structure: everyone is a “cross-disciplinary all-rounder”
When a person can use AI to write 100% of their code, increasing development efficiency by 10 to 100 times, team organization naturally changes.
Boris’s core judgment about future teams is: “Cross-disciplinary generalists” will be much more common than today.
Today, a "generalist" usually means someone who covers multiple engineering domains, say iOS, Web, and server, in one person. The new trend he observes is:
Generalists will cross more functional boundaries, such as: engineering + design, engineering + product + data science, engineering + finance/operations, etc.
On the Claude Code team this is already the reality: engineering managers, product managers, designers, data scientists, finance staff, and user researchers all write code and lean heavily on Claude Code to advance their work.
In other words, everyone still has their own professional depth, but writing code is no longer the privilege of a small group; it becomes a basic skill everyone possesses, much like Office or slide-deck skills today.
This also points to a broader judgment: the threshold for software productivity will be drastically lowered, and the most domain-knowledgeable people will become the most advantageous “developers.”
For example, in accounting software, the person who should lead product design and logic isn’t necessarily the top engineer, but an accountant who deeply understands the business and can skillfully wield AI to write code, because “coding” becomes relatively easy, and “deep domain understanding” is the scarce resource.
From a "programmer class" to universal coding: the printing press analogy
To illustrate the depth of this shift, Boris offers his favorite analogy from tech history: the impact of AI on software production is likely similar to the impact of the printing press in 15th-century Europe on text production.
Before movable type printing, only about 10% of Europeans could read and write. These literate few clustered around the power structures (kings, nobles, churches), reading and writing on others' behalf. Literacy was a highly specialized skill that most people never acquired in their lifetimes.
Within just 50 years after the invention of printing, the amount of text published in Europe exceeded the total of the previous thousand years, and the cost of a single book dropped by about 100 times. Over the next few centuries, as education systems and social structures evolved, global literacy rates rose to around 70%. Reading and writing shifted from a professional skill for a few to a basic ability for most.
Boris’s view is: software and programming are experiencing the same curve, and at an even faster pace.
In the past, writing software was a “highly specialized, extremely high-threshold” profession.
Now, coding will become a universal skill like “typing” or “sending texts.”
There will still be professional engineers and top-tier system architects, but societal division of labor will be fundamentally reshaped: many domain experts, entrepreneurs, and ordinary workers will be able to directly “collaborate with models to write software.”
Will SaaS face a “great extinction”?
When AI reduces the cost of writing software by 10 times or even 100 times, what will happen to existing SaaS products? Will there be a “SaaS great extinction”? This is one of Boris’s most frequently asked questions.
His answer is much more complex than a simple “yes/no.” He borrows the “Seven Powers” framework often mentioned in the Acquired podcast to analyze.
In his view, AI will rapidly devalue some business moats:
Switching costs: When you can quickly migrate data and rebuild workflows with models, the lock-in effects created by complex integrations and configurations will weaken significantly.
Process power: many companies rely on process design and complex workflows as a competitive advantage. Large models are increasingly capable of understanding and improving processes, especially models like 4.7 that can "auto hillclimb" (iterate until the goal is reached), which excel at squeezing out inefficiencies.
Meanwhile, some more fundamental moats will not disappear because of AI—in fact, they may become even more important:
Network effects
Economies of scale
Scarce resources (such as unique data, channels, or special qualifications)
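The "auto hillclimb" behavior mentioned above (keep applying small changes, keep any change that improves a score, stop at a local optimum) is a classic optimization pattern. A minimal sketch, with a toy objective standing in for "process efficiency"; nothing here reflects how the model actually does it:

```typescript
// Minimal hill climbing: repeatedly move to the best neighbor as long as
// it improves the score; stop at a local optimum. The objective below is
// a toy stand-in, not anything from the talk.

function hillClimb(
  start: number,
  score: (x: number) => number,
  maxSteps = 1000,
): number {
  let current = start;
  for (let step = 0; step < maxSteps; step++) {
    // Consider the two nearest neighbors and pick the higher-scoring one.
    const neighbors = [current - 1, current + 1];
    const best = neighbors.reduce((a, b) => (score(a) >= score(b) ? a : b));
    if (score(best) <= score(current)) return current; // local optimum
    current = best;
  }
  return current;
}

// Toy objective with a single peak at x = 7.
const objective = (x: number) => -(x - 7) * (x - 7);
```

The relevance to "process power" is the stopping condition: a model that can score a workflow and propose small edits can run this loop unattended, which is exactly what erodes a moat built on hand-tuned processes.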
Another key trend is: in the next 10 years, the number of startups capable of “building products comparable to big companies with few people” will increase significantly—possibly tenfold compared to the past decade.
The reasons are:
Big companies face huge inertia and internal resistance when restructuring processes and retraining staff to use AI.
New teams can be "AI-native" from day one, building high-value products with very few people and outcompeting traditional vendors in many niche fields.
In his view, this era is very friendly to entrepreneurs and developers—“it might be one of the best times to build products and start a business.”
How does Anthropic “eat its own dog food”?
Many think that a model company like Anthropic would use “more powerful secret versions” internally, staying ahead of the outside world for a long time. Boris’s view is exactly the opposite:
At the model level, they use the same versions as everyone else (e.g., heavily rely on Opus 4.7), with only limited experiments on research models like Mythos. They do not rely long-term on a “private version” that’s hard for outsiders to access.
The real advantage, in his opinion, lies not in the models themselves, but in the organization’s deep integration of AI.
Specifically:
There are no longer any "purely hand-coded" practices internally; even SQL queries are generated by models.
Different teams’ Claude instances “chat and collaborate” on Slack, helping engineers fill gaps and communicate across teams.
Many workflows are reconstructed around mechanisms like loops, sub-agents, routines, enabling models to continuously push work in the background.
Because of this, he believes the biggest current “gap” is not in technology availability, but in organizational and process design. For startups, this is a huge opportunity: instead of gradually transforming old workflows, they should design their organization from day one to be “AI-native.”
Product opportunities for the next 6–12 months
Returning to product and startup questions: if a few years ago he saw an “overhang” in programming products, where is the next overhang today?
He mentions several directions:
ClaudeDesign: already usable today, and likely to become far more impressive as models iterate. It represents the deep AI transformation of design workflows.
Loop/Batch/large-scale parallel agents: enabling hundreds or thousands of tasks to run simultaneously on different agents, becoming a standard capability rather than a niche trick.
ComputerUse (models directly controlling computers): using vision + control capabilities to make models operate local software like humans, a universal solution for legacy systems without APIs or MCP.
The common feature of these directions is: they are “barely usable” today, but the true explosion point may come after one or two generations of models.
Just like Claude Code in the early days, ambitious teams can start designing product forms for “future models” now, seizing the lead when models catch up.
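The second direction above, large-scale parallel agents, comes down to fanning out many independent tasks with a bounded concurrency limit. A hedged sketch of that pattern in TypeScript; `runParallel` and `AgentTask` are invented names for illustration, not any real Claude Code API:

```typescript
// Illustrative sketch (not a real Claude Code API): fan out many small
// tasks to parallel "agents", with at most `limit` running at once.

type AgentTask<T> = () => Promise<T>;

async function runParallel<T>(tasks: AgentTask<T>[], limit: number): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0; // index of the next unclaimed task
  // Each worker claims tasks one at a time until none remain. Claiming is
  // safe without locks because JavaScript is single-threaded between awaits.
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker),
  );
  return results;
}
```

Swap the toy tasks for calls that spawn real agent sessions and raise the limit, and this is the shape of "hundreds or thousands of tasks on different agents" as a standard capability rather than a niche trick.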