# Anthropic co-founder predicts the emergence of “self-developing AI” by 2028

By 2028, AI systems capable of independently developing and training their own successors without human involvement may appear on the market. The forecast comes from Jack Clark, co-founder of Anthropic.

“This is very important. I don’t know how to process it. I’m coming to this conclusion reluctantly, because the consequences are so great that I feel overwhelmed by them, and I’m not sure society is ready for the changes implied by automated AI development,” he noted.

Clark described a scenario of fully automated AI research, one in which the model itself:

  • sets research tasks;
  • designs experiments;
  • writes and tests code;
  • optimizes training;
  • improves the architecture of the next AI version.

The expert called it “a Rubicon into an almost unpredictable future” and put the probability of such a scenario at 60% over the next two years.

## What the assessment is based on

Clark’s conclusion is built on the dynamics of several benchmarks:

  • SWE-Bench — a test of solving real engineering problems from GitHub repositories. At the end of 2023, the best models handled about 2% of cases; by the spring of 2026, the figure had reached 94%;
  • CORE-Bench — reproducing the results of scientific AI papers, including setting up the environment, running the code, and analyzing the outputs. According to Clark, the benchmark is effectively “solved”: modern agents score about 95.5%;
  • MLE-Bench — performing ML tasks at the Kaggle level. The best agentic systems already reach 64–65%.

According to Anthropic’s co-founder, all three metrics point to the same thing: AI is rapidly moving from writing code for narrow tasks to carrying out engineering and research work in full.

## Growth in autonomy

Another argument is the increased duration of tasks that AI models are capable of performing without human intervention.

According to METR, in 2022 systems could handle tasks that took humans tens of seconds. In 2024 the figure grew to about 40 minutes, and in 2025 to six hours. Today, leading models can carry out engineering work for about 12 hours straight.

Clark linked this to the spread of agentic programming tools. The longer a model can hold onto a goal, check intermediate results, and correct errors, the more stages of the research cycle can be delegated to it.
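A minimal sketch of such a loop, assuming a generic agent design; the functions below (`take_step`, `check`) are illustrative stand-ins, not the internals of any real coding tool:

```python
# Hypothetical sketch of a long-running agentic loop: hold a goal,
# take a step, check the intermediate result, correct and retry.
# All functions are illustrative stand-ins, not a real coding agent.

def take_step(goal: str, history: list[str]) -> str:
    # Stand-in for one unit of work (edit code, run a script, etc.).
    return f"attempt {len(history) + 1} toward: {goal}"

def check(result: str) -> bool:
    # Stand-in for validating intermediate output (e.g., running tests);
    # here we pretend the third attempt finally passes.
    return result.startswith("attempt 3")

def agent(goal: str, max_steps: int = 10) -> str | None:
    """The longer this loop runs unattended, the more can be delegated."""
    history: list[str] = []
    for _ in range(max_steps):
        result = take_step(goal, history)
        history.append(result)
        if check(result):
            return result   # goal reached
        # Otherwise treat the failure as feedback and try again.
    return None             # budget exhausted: hand back to a human

print(agent("fix the failing unit test"))
```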

## Why it matters for AI development

The modern AI development cycle follows a single scheme: study the materials, reproduce the result, assemble an experiment, train or fine-tune the model, check the metrics, identify bottlenecks, and repeat. Growth on SWE-Bench, CORE-Bench, and MLE-Bench shows that models are already handling entire segments of this cycle.
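Expressed as code, the cycle Clark has in mind might look roughly like this; it is only a sketch, with stubbed stand-ins for each stage rather than any lab’s real pipeline:

```python
# Hypothetical sketch of the "scale, measure, fix, repeat" development
# cycle. Each function is a stub standing in for a stage a human (or,
# in Clark's scenario, an agent) would perform.

def train(config: dict) -> dict:
    # Stand-in for training or fine-tuning a candidate model.
    return {"quality": 0.9 * config["scale"]}

def evaluate(model: dict) -> float:
    # Stand-in for checking metrics on benchmarks.
    return model["quality"]

def diagnose(score: float) -> str | None:
    # Stand-in for identifying the bottleneck behind a weak score.
    return "insufficient_scale" if score < 1.0 else None

def propose_fix(config: dict, bottleneck: str) -> dict:
    # Stand-in for patching the setup before the next iteration.
    if bottleneck == "insufficient_scale":
        return {**config, "scale": config["scale"] * 1.5}
    return config

def research_cycle(config: dict, target: float) -> dict:
    """Train, check metrics, find the bottleneck, fix it, repeat."""
    while (score := evaluate(train(config))) < target:
        config = propose_fix(config, diagnose(score))
    return config

print(research_cycle({"scale": 0.5}, target=1.0))
```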

Clark also pointed to progress on more specialized tasks. For example, AI is beginning to be used to write GPU kernels, the code that determines how efficiently models train and run inference on specific hardware.

Another direction is model fine-tuning. In the PostTrainBench benchmark, AI systems are tasked with improving small open-source LLMs.

As of the spring of 2026, the best neural networks achieve 25–28% of the target improvement (human teams achieve 51%). Clark considers the result significant: the bar is set by real instruction-tuned models created by experienced researchers.

Anthropic measured how its models optimize LLM training on CPUs. Over a year, the speedup achieved grew from 2.9x (Claude Opus 4) to 52x (Claude Mythos Preview). A human typically needs four to eight hours for a similar task.

## AI is already learning to manage AI

Clark noted that modern systems are starting to coordinate the work of other agents. This approach is already used in products such as Claude Code or OpenCode: one assistant distributes tasks among multiple sub-assistants, monitors them, and gathers the results.

This is important for AI development: research rarely consists of a single linear task; it usually involves dozens of parallel processes, including writing code and configuring environments. If the model begins to manage such loops on its own, the degree of human involvement will drop sharply.
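As a rough illustration of the pattern (and not the actual internals of Claude Code or OpenCode), an orchestrator that fans tasks out to sub-agents and collects the results might look like this:

```python
# Hypothetical orchestrator sketch: one coordinating agent distributes
# tasks among sub-agents, monitors them, and gathers the results.
# sub_agent is an illustrative stand-in for a worker model call.
from concurrent.futures import ThreadPoolExecutor

def sub_agent(task: str) -> str:
    # Stand-in for a worker handling one piece of work,
    # e.g. writing code or configuring an environment.
    return f"done: {task}"

def orchestrator(tasks: list[str]) -> list[str]:
    """Fan tasks out in parallel, then collect and return the results."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(sub_agent, tasks))
    # A real orchestrator would inspect results here and re-dispatch
    # anything that failed before handing the summary back up.
    return results

print(orchestrator(["write code", "configure environment", "run tests"]))
```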

## Do neural networks need creativity?

In the view of the Anthropic co-founder, one of the key questions is what AI development more closely resembles: discovering the general theory of relativity or building with Lego.

Clark acknowledged that current LLMs are not yet capable of generating fundamentally new scientific ideas. However, for automating a significant portion of AI R&D, that may not be necessary.

“Mostly, AI moves forward through humans methodically executing a certain cycle: take a well-functioning system, scale some aspect of it, look at the errors that arise when scaling, and fix them. Very few non-standard ideas are required for this, and most of the process resembles unglamorous, routine engineering work,” the expert noted.

## Early signs of scientific contribution

Clark believes that AI models are already beginning to show early signs of scientific intuition. He provided several examples from mathematics and computer science:

  • a team of mathematicians used Gemini to check about 700 Erdős problems and obtained 13 solutions, one of which researchers called a “slightly non-trivial” contribution to an open problem;
  • scientists from the University of British Columbia, the University of New South Wales, Stanford, and Google DeepMind published a mathematical proof found with substantial help from Gemini-based tools.

## What happens if the forecast is correct

Clark drew attention to the fact that the largest AI labs are already moving toward automating research. OpenAI intends to create an AI intern for independent scientific work, and Anthropic is publishing work on automated alignment with human values.

If the current pace is maintained, the expert predicted, the industry will move into a phase of fully automated AI development: a self-sustaining cycle in which each new generation of AI accelerates the emergence of the next.

According to him, if the transition takes place by the end of 2028, the world will face not only a technological leap. Fundamental questions of safety, the distribution of capital, the role of human labor, and control over systems that are starting to evolve faster than their creators will also come to the forefront.

“If you made me name the probability for 2027, I would say 30%. If we don’t see this by the end of 2028, then, I think, we will find some flaw in the current technological paradigm, and a human invention will be required to move forward,” Clark concluded.

Recall that in January, Anthropic CEO Dario Amodei predicted the imminent arrival of AGI and coming job losses.
