Zhejiang University research team proposes a new approach: teaching AI how the human brain understands the world
Large models keep getting bigger, and the mainstream view holds that the more parameters a model has, the closer it comes to the way humans think. However, a paper published by a Zhejiang University team on April 1 in Nature Communications (original link: https://www.nature.com/articles/s41467-026-71267-5) puts forward a different view. Testing mainly SimCLR, CLIP, and DINOv2, the team found that as model scale increases, the ability to recognize specific objects does keep improving, but the ability to understand abstract concepts not only fails to improve, it actually declines. As the number of parameters grew from 22.06 million to 304.37 million, performance on concrete-concept tasks rose from 74.94% to 85.87%, while performance on abstract-concept tasks fell from 54.37% to 52.82%.
The difference between how humans and models think
When the human brain processes concepts, it first forms a set of classification relationships. A swan and an owl look different, yet people still place both in the category of birds. One level up, birds and horses can be grouped into a broader layer: animals. When people see something new, they often first think about what it resembles among things they have seen before, and roughly which category it belongs to. People keep learning new concepts and organizing their experience, using these relationships to recognize new things and adapt to new situations.
Models also categorize, but they form categories differently, relying mainly on patterns that recur in large-scale data. The more frequently a specific object appears, the easier it is for the model to recognize. Broader categories are harder: the model must capture the commonalities across many different objects and group those shared features into a single class. Existing models still have clear weaknesses here; as parameters keep increasing, concrete-concept tasks improve while abstract-concept tasks sometimes decline.
What the human brain and models have in common is that both form a set of classification relationships internally, but the emphasis differs. The brain's higher-level visual areas naturally separate broad categories such as living things and non-living things. Models can separate specific objects, but they struggle to form such larger categories stably. This difference makes the brain better at applying old experience to new objects, so when faced with something unfamiliar we can classify it quickly. Models depend more heavily on what they have already seen, so when they encounter new objects they tend to stop at surface-level features. The method proposed in the paper builds around exactly this difference: it uses brain signals to constrain the model's internal structure, pulling it closer to the brain's way of categorizing.
The solution from the Zhejiang University team
The team's solution is distinctive. Rather than simply stacking more parameters, it supervises the model with a small amount of brain-signal data, recorded from people's brain activity while they look at images. The paper's own wording is: “transfer human conceptual structures to DNNs.” In other words, it teaches the model, as far as possible, how humans classify, how they generalize, and how they group similar concepts together.
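The article describes this supervision only at a high level, so the following is a minimal sketch, not the paper's actual training code. It assumes an RSA-style setup in which a representational-alignment term is added to an ordinary classification loss; the function names, the cosine-distance RDM, and the align_weight parameter are all illustrative assumptions.

```python
# Minimal sketch (an assumption, not the paper's exact method): combine a
# normal classification loss with a term that pulls the model's
# representational geometry toward a brain-derived one.
import torch
import torch.nn.functional as F

def rdm(features: torch.Tensor) -> torch.Tensor:
    """Representational dissimilarity matrix: pairwise cosine distances."""
    f = F.normalize(features, dim=1)
    return 1.0 - f @ f.T  # shape (batch, batch)

def combined_loss(logits, labels, features, brain_rdm, align_weight=0.5):
    """Cross-entropy plus alignment between the model RDM and a brain RDM.

    brain_rdm: a precomputed (batch, batch) dissimilarity matrix derived from
    recorded brain responses to the same images (hypothetical input format).
    """
    task_loss = F.cross_entropy(logits, labels)
    align_loss = F.mse_loss(rdm(features), brain_rdm)
    return task_loss + align_weight * align_loss
```

The intuition behind a loss of this shape is that the task term keeps the model accurate on its labels, while the alignment term nudges its internal similarity structure toward the one measured in the brain.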
The team ran experiments with 150 known training categories and 50 unseen test categories. The results show that as this training proceeds, the distance between the model's representations and the brain's keeps shrinking, and the change appears in both the known and the unseen categories. This indicates the model is not merely memorizing individual samples but genuinely beginning to learn a way of organizing concepts closer to the human brain's.
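The article does not say how this representational distance is measured. One common choice in representational similarity analysis, offered here purely as an illustration, is one minus the Spearman correlation between the upper triangles of the model's and the brain's dissimilarity matrices:

```python
# Hypothetical illustration of one standard representation-distance measure
# (the article does not specify the paper's metric).
import numpy as np
from scipy.stats import spearmanr

def rdm_distance(model_rdm: np.ndarray, brain_rdm: np.ndarray) -> float:
    iu = np.triu_indices_from(model_rdm, k=1)  # upper triangle, no diagonal
    rho, _ = spearmanr(model_rdm[iu], brain_rdm[iu])
    return 1.0 - rho  # 0 = identical geometry; larger = further apart
```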
After this training, the model learns better from very few samples and handles new situations better. On a task that gives the model only a handful of examples and asks it to distinguish abstract concepts such as living versus non-living things, the average improvement was 20.5%, and the model even outperformed control models with far larger parameter counts. The team also ran 31 additional sets of dedicated tests, in which several models showed improvements of close to 10%.
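The article likewise gives no details of the few-shot protocol. A standard way such probes are often run, sketched below as an assumption rather than the paper's documented setup, is nearest-prototype classification over frozen embeddings: average the few labeled examples per class into prototypes, then assign each query image to the closest prototype.

```python
# Generic few-shot probe (an assumed protocol, not the paper's documented one):
# average the embeddings of the few labeled examples per class into prototypes,
# then classify each query by its nearest prototype.
import torch
import torch.nn.functional as F

def few_shot_accuracy(support, support_labels, query, query_labels):
    """support/query: (n, d) embedding tensors; labels: (n,) int tensors."""
    classes = support_labels.unique()
    protos = torch.stack([support[support_labels == c].mean(0) for c in classes])
    protos = F.normalize(protos, dim=1)
    q = F.normalize(query, dim=1)
    pred = classes[(q @ protos.T).argmax(dim=1)]  # nearest prototype by cosine
    return (pred == query_labels).float().mean().item()
```

Under a protocol like this, accuracy directly reflects how well the embedding space separates the abstract categories, which is exactly the property the brain-signal supervision is meant to improve.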
In the past few years, the familiar path in the model industry has been ever-larger scale. The Zhejiang University team chose another direction, moving from “bigger is better” to “structured is smarter.” Scaling does help, but it mainly improves performance on familiar tasks; the abstract understanding and transfer ability that humans have are just as crucial for AI. In the future, we need AI's thinking structure to become more similar to the human brain's. The value of this direction lies in pulling the industry's attention away from plain scale expansion and back to cognitive structure itself.
Neosoul and the future
This points to a bigger possibility: AI's evolution may not happen only during model training. Training determines how an AI organizes concepts and forms higher-quality judgment structures. After it enters the real world, the next layer of evolution is only just beginning: how an AI agent's judgments are recorded, how they are tested, and how the agent keeps growing through ongoing competition in the real world, much as humans learn and self-evolve. This is exactly what Neosoul is doing now. Neosoul does not merely have AI agents produce answers; it places each agent into a system of continuous prediction, verification, settlement, and selection, so that the agent keeps optimizing itself between predictions and outcomes, keeping better structures and eliminating worse ones. What the Zhejiang University team and Neosoul ultimately point to is the same goal: AI that can not only solve problems but think comprehensively and keep evolving.