Zhipu's Tang Jie: Claude may have already completed autonomous training, with 2 million chips dedicated to self-evolution

According to Beating Monitoring, Zhipu AI founder and chief scientist Tang Jie posted on X predicting that the biggest breakthrough for large models this year will be solving long-horizon tasks: tasks that require an agent to operate continuously in an environment to accomplish a complex goal.

He argued that this capability will accelerate the industry's evolution from "one-person companies" to "employee-free companies" (run by NPC-like agents), and that autonomous agent systems (AAS) will become the next technological frontier.

Tang Jie believes that achieving this vision requires mastering three technological pillars: memory, addressed through ultra-long context and RAG; continuous learning, approximated indirectly by shortening model update cycles; and self-judgment, which remains the hardest to break through, though he sees an early prototype of it in Opus 4.7.
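The "memory via RAG" pillar can be sketched in a few lines: the agent appends past observations to a long-term store and, before each step, retrieves the most relevant ones to place back into its context window. This is a minimal illustrative sketch only (the `AgentMemory` class, bag-of-words similarity, and sample entries are all hypothetical, not any vendor's implementation; a real system would use a learned embedding model and a vector database):

```python
# Minimal RAG-style agent memory sketch (illustrative only).
from collections import Counter
import math

def _bow(text):
    # Bag-of-words vector; real systems use learned embeddings instead.
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class AgentMemory:
    """Append-only long-term store queried by similarity before each agent step."""
    def __init__(self):
        self.entries = []

    def remember(self, text):
        self.entries.append((text, _bow(text)))

    def recall(self, query, k=2):
        qv = _bow(query)
        ranked = sorted(self.entries, key=lambda e: _cosine(qv, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

mem = AgentMemory()
mem.remember("User prefers responses in French")
mem.remember("Build step failed because of a missing dependency")
mem.remember("Deployment target is AWS us-east-1")
print(mem.recall("why did the build fail?", k=1))
```

The design point is that retrieval keeps the working context small while the store grows without bound, which is what lets a long-horizon agent "remember" far more than its context window holds.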

The ultimate goal of large models, in his view, is self-evolution. Tang Jie speculates that Claude may already have a "self-training baseline" capable of writing code, cleaning data, and training itself, and that the 2-million-chip cluster rumored for next year is likely dedicated to autonomous training.

He predicts that traditional operating systems will be replaced by large-model operating systems (LLM OS), with applications generated on demand, thereby completely disrupting the traditional von Neumann architecture.
