Anthropic co-founder predicts the emergence of "self-developing AI" by 2028 - ForkLog
By 2028, AI systems capable of independently developing and training their own successors, without human involvement, may appear on the market. The forecast comes from Jack Clark, co-founder of Anthropic.
Clark described a scenario of fully automated AI research, carried out by the model itself.
The expert called this “a Rubicon into an almost unpredictable future” and estimated the probability of such a scenario at 60% over the next two years.
What the assessment is based on
Clark’s conclusion rests on the dynamics of three benchmarks: SWE-Bench, CORE-Bench, and MLE-Bench.
According to Anthropic’s co-founder, all three metrics show one thing: AI is rapidly moving from writing code for specific tasks to fully carrying out engineering and research tasks.
Growth in autonomy
Another argument is the increased duration of tasks that AI models are capable of performing without human intervention.
According to METR, in 2022 systems handled tasks that took humans tens of seconds; by 2024 the figure had grown to about 40 minutes, and by 2025 to roughly six hours. Today, leading models can carry out engineering work for about 12 hours straight.
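The figures above imply roughly exponential growth in autonomous task length. A back-of-envelope calculation, using only the numbers quoted in this article (METR's own methodology and data differ), gives the implied doubling time:

```python
import math

# Task lengths AI could handle autonomously, per the figures quoted above.
# These are illustrative values from this article, not METR's raw data.
length_2024_min = 40        # ~40 minutes in 2024
length_2025_min = 6 * 60    # ~6 hours in 2025

# How many times the task length doubled over that one year
doublings_per_year = math.log2(length_2025_min / length_2024_min)

# Implied doubling time, in months
doubling_time_months = 12 / doublings_per_year

print(f"{doublings_per_year:.2f} doublings per year")        # ~3.17
print(f"doubling time ≈ {doubling_time_months:.1f} months")  # ~3.8
```

On these numbers, autonomous task length doubles roughly every four months, which is the kind of curve Clark's two-year forecast extrapolates.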
Clark linked this to the spread of agentic tools for programming. The longer a model can hold onto a goal, check intermediate results, and correct errors, the more stages of the research cycle it can be delegated.
Why it matters for AI development
The modern AI development cycle is organized according to a single scheme: study materials, reproduce the result, assemble an experiment, train or fine-tune the model, check metrics, identify bottlenecks, and repeat. Growth on SWE-Bench, CORE-Bench, and MLE-Bench shows that models are already handling entire segments of such a cycle.
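The cycle described above can be sketched as a loop. The code below is a deliberately toy, runnable stand-in: "training" is simulated by a known quadratic loss and "revising the hypothesis" is a simple hyperparameter probe; all names and the loss function are illustrative, not a real research workflow.

```python
# Toy sketch of the research cycle described above:
# train, check metrics, identify the bottleneck, revise, repeat.

def simulated_loss(lr):
    """Stand-in for a full training run: loss is minimal at lr = 0.1."""
    return (lr - 0.1) ** 2

def research_cycle(lr=1.0, tolerance=1e-4, max_iters=50):
    history = []
    for _ in range(max_iters):
        loss = simulated_loss(lr)        # "train and check metrics"
        history.append((lr, loss))
        if loss < tolerance:             # no bottleneck left: stop
            return lr, loss, history
        # "identify bottleneck and revise": probe both directions,
        # keep whichever candidate trains better
        candidates = [lr - 0.1, lr + 0.1]
        lr = min(candidates, key=simulated_loss)
    return lr, loss, history

best_lr, best_loss, history = research_cycle()
print(f"converged to lr={best_lr:.2f} after {len(history)} iterations")
```

The benchmarks named above each probe a different stage of this loop: SWE-Bench the code-writing step, CORE-Bench the reproduction step, and MLE-Bench the end-to-end experiment.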
Clark also pointed to progress in more specialized tasks. For example, AI is beginning to be used for designing GPU kernels: the code that determines the efficiency of model training and inference on specific hardware.
Another direction is the fine-tuning of models. In the PostTrainBench benchmark, AI systems improve small open-source LLMs. As of the spring of 2026, the best neural networks achieve 25–28% of the target improvement, versus 51% for human teams. Clark considers the result significant: the benchmark's targets are real instruction-tuned models created by experienced researchers.
Anthropic measured how its models optimize the training of LLMs on CPUs. Over a year, the achieved speedup grew from 2.9x (Claude Opus 4) to 52x (Claude Mythos Preview). A human typically needs four to eight hours for a similar task.
AI is already learning to manage AI
Clark noted that modern systems are starting to coordinate the work of other agents. This approach is already used in products such as Claude Code or OpenCode: one assistant distributes tasks among multiple sub-assistants, monitors them, and gathers the results.
This matters for AI development: research projects rarely involve a single linear task; usually they consist of dozens of parallel processes, including writing code and configuring the environment. If the model can manage such loops on its own, the degree of human involvement will drop sharply.
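The coordinator/sub-agent pattern described above can be sketched in a few lines. Everything here is a toy stand-in: `sub_agent` is a hypothetical placeholder for a model call, and real products like Claude Code use far richer protocols for dispatch and validation.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of the coordinator/sub-agent pattern described above.
# `sub_agent` is a hypothetical stand-in for a model call; a real system
# would dispatch prompts to separate agent contexts and validate results.

def sub_agent(task: str) -> str:
    """Pretend to complete one delegated task."""
    return f"done: {task}"

def coordinator(goal: str, subtasks: list[str]) -> dict:
    # Fan out: run sub-agents in parallel, one per subtask
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(sub_agent, subtasks))
    # Gather: the coordinator checks and assembles the pieces
    failed = [r for r in results if not r.startswith("done")]
    return {"goal": goal, "results": results, "failed": failed}

report = coordinator(
    "ship the feature",
    ["write code", "configure environment", "run tests"],
)
print(report["results"])
# ['done: write code', 'done: configure environment', 'done: run tests']
```

The design point is the division of labor: sub-agents hold narrow context for one task each, while the coordinator holds the goal, monitors progress, and retries failures.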
Do neural networks need creativity?
In the view of the Anthropic co-founder, one of the key questions is what AI development more closely resembles: discovering the general theory of relativity or building with Lego.
Clark acknowledged that current LLMs are not yet capable of generating fundamentally new scientific ideas. However, for automating a significant portion of AI R&D, that may not be necessary.
Early signs of scientific contribution
Clark believes that AI models are already beginning to show early signs of scientific intuition, and he cited several examples from mathematics and computer science.
What happens if the forecast is correct
Clark drew attention to the fact that the largest AI labs are already moving toward automating research: OpenAI intends to create an AI intern for independent scientific work, and Anthropic is publishing work on automatically aligning models with human values.
If the current pace is maintained, the expert predicted, the industry will move into a phase of fully automated AI development: an ongoing cycle in which each new generation of AI accelerates the emergence of the next.
According to him, if the transition takes place by the end of 2028, the world will face not only a technological leap. Fundamental questions of safety, the distribution of capital, the role of human labor, and control over systems that are starting to evolve faster than their creators will also come to the forefront.
Recall that in January, Anthropic CEO Dario Amodei predicted the imminent arrival of AGI and a reduction in jobs.