In a world where the human brain and AI coexist, what will education become?

In the 4th century BCE, Plato, in the “Phaedrus,” recorded Socrates’ warning: words would make people “no longer remember from within, but rely on external symbols,” and ultimately “forget the truth and only remember the images of words.”

In the 15th century, the German monastery abbot Trithemius, in “Notes on Copyists,” lashed out at printing: “We will lose the training of memory, become lazy and empty.”

In 2008, The Atlantic ran an article titled “Is Google Making Us Stupid?”: “We’re losing the ability to read deeply and stay focused for the long haul.”

In 2026, in a parent group for a middle school in Beijing, at midnight, a mother asked: “My child wrote an essay using AI in five minutes—what should I do?”

From writing to printing, from the internet to AI—four eras, the same kind of anxiety.

Clearly, the first three anxieties have been thoroughly disproven by history. Writing, printing, and the internet turned out to be tools that improve the efficiency of knowledge transmission. Far from weakening human learning ability, they greatly increased learning efficiency and became core drivers of the progress of human civilization.

But this time, things are a bit different.

AI does not merely optimize the medium through which humans reach knowledge; it can replace the entire process by which humans remember, reason, solve problems, and produce results. Overnight, parents and teachers everywhere were pulled into deep anxiety: AI had become the ultimate cheating tool.

Earlier, Stanford professor Fei-Fei Li said in a conversation with Titan Media (an exclusive interview of Fei-Fei Li by Zhao Hejuan: “What I believe in is human beings, not AI”): “AI is quickly proving that many things can be done by machines. For humans to spend ten, twenty, even dozens of years learning how to do things that machines can already do is a tremendous waste. People should use the time and energy released by AI to shift toward cultivating abilities that AI cannot replace—cognition, creativity, empathy, and the fundamental qualities of ‘how to be human.’”

And with the emergence of OpenClaw, the arrival of a world where humans and AI coexist and collaborate has been pulled forward by several more years. If human-AI symbiosis is the inevitable future, then we need not fixate on protecting abilities that AI is sure to replace. We can skip past the short-term anxiety of “Is AI a cheating tool or a learning assistant?” and instead discuss a deeper, more fundamental question:

When Agents become a necessary component of the future human, what kind of upgrade does a person’s own “ability system” need? How should the massive societal infrastructure of the education system be rebuilt? And after rebuilding, what new opportunities will the education system have?

What is the underlying capability for asking good questions?

In the future, the measure of a person’s ability will no longer be “what they can accomplish independently,” but “what unique value they contribute in collaboration with Agents.” This is not simply a relationship between humans and tools; it is a shift in where humans’ core capabilities lie.

Li Kejia, a founder who has long focused on “human-AI collaborative learning,” proposed a “90/10 model”: humans should delegate 90% of tasks (information retrieval, organization, and initial screening) to Agents, while keeping the remaining 10% (asking questions, judgment, and decision-making) for themselves.

In the traditional logic of education, the amount of knowledge stored is the primary metric for measuring students. In the Agent era, that logic is becoming obsolete. When AI can retrieve the entire knowledge base of human civilization in a fraction of a second, being well-read is no longer a scarce ability, and answers gradually matter less. Everyone says the ability to “ask good questions” is becoming especially precious. Yet few ask: what exactly is the underlying capability behind asking good questions?

Yang Linfeng, co-founder and chairman of Onion Learning, has a unique perspective. He believes this capability is rooted in a “skeleton-based” knowledge structure. “In fact, the people who truly learn knowledge thoroughly are not those who memorize facts about every knowledge point by rote. We used to have a habit when learning: we would write down the little bits and pieces the teacher mentioned, including content that wasn’t tested. For example, in a biology class about bees, I would thoroughly understand the differences and division of labor among worker bees, queen bees, and drones. These things might not be tested, but because what you remember is a bigger domain, knowledge connects with knowledge. As a result, you don’t easily forget all the knowledge points in that system. That is the power of the framework produced by systematic learning.”

“The value of ‘remembering knowledge points’ is definitely changing, but it isn’t becoming unimportant—it’s becoming something you need to remember differently: not isolated knowledge points, but the ways knowledge connects. Like building a house—AI can provide bricks and tiles, but students’ minds need their own architectural framework. A person can’t ask good questions about a completely unfamiliar domain, and the act of ‘asking good questions’ itself can’t happen without a knowledge skeleton to support it.

“The value of systematic learning lies in that it builds a cognitive scaffolding. With that scaffolding, when students learn with AI, they know what they lack and what they need to fill in. Without a system, the so-called ‘AI Q&A’ would be like picking up shells on a beach—picking up many, but unable to assemble a complete picture.” Yang Linfeng added. “In my view, the future learning process will be a state where systematic learning and AI Q&A coexist—‘first build the skeleton, then fill in the flesh.’ Systematic learning provides the foundation, and Agents help you flexibly call upon and extend that knowledge in specific scenarios.”

This philosophy is also widely recognized across the industry. Li Kejia likewise believes: “The value of memory is no longer ‘storage,’ but providing the brain with ‘a structure for placing answers.’ The framework capability enables sharp awareness of information gaps. Students can only have the possibility to ask good questions if they can clearly see what’s missing. And only when students have seen enough frameworks can they develop the ability to break down a macro narrative into verifiable propositions.”

How do you develop “good taste”?

When Agents can generate ten versions of a plan in a matter of seconds, a human’s core value becomes “choosing one among many answers.” Many people call this ability to choose “good taste.” Professor Hu Xuming of the Hong Kong University of Science and Technology calls it “appraisal ability.” Whether framed as taste or appraisal, it sounds like a capability that is hard to teach in a concrete, transferable way.

A senior former education-investment professional, Jiang Feng (a pseudonym), told us: “The essence of appraisal ability is two kinds of capabilities—raising doubts and judging value.”

The most dangerous thing about AI isn’t that it makes mistakes; it’s that it remains fully confident even when it does. Humans are naturally inclined to inertia: when AI produces an answer, the average person’s first reaction is acceptance, not questioning. Neuroscience evidence suggests that passively accepting AI outputs triggers long-term depression (LTD), weakening synaptic strength, whereas active questioning, improvement, and collaborative creation promote long-term potentiation (LTP), genuinely strengthening learning ability.

Professor Fei-Fei Li also mentioned in an interview that AI should not be a “test-taking machine,” but a tool for “Socratic dialogue.” Socrates never gave answers directly; he only asked questions, forcing you to figure out what you actually wanted. And the essence of the ability to ask follow-up questions is cultivating a reflex to doubt authoritative answers.

In teaching practice, Yang Linfeng started early to cultivate students’ abilities to question and follow up. “Onion Learning’s AI intelligent learning companion won’t think for the student. Instead, through Socratic-style questioning, it guides students to examine AI’s reasoning chain, helps them break down problems, and helps build logic.”

Another underlying capability mentioned by Jiang Feng—value judgment—can also be strengthened through training.

The concept of the “final decision-maker” proposed by Professor Li Jianbiao of The Hong Kong Polytechnic University points directly at the core of value judgment: when AI can generate infinite options, humans must establish their own value coordinate system. Reportedly, in teaching practice, PolyU not only encourages students to actively use AI in their assignments, but also requires students to explicitly document the process of collaborating with AI and the judgment logic behind each choice.

Rebuilding teaching content and assessment systems

In an interview, Professor Fei-Fei Li also publicly called for change: “If modern education still uses century-old methods that emphasize standard answers and knowledge cramming, it will fall severely behind the times. People who care about education, who can influence education policy, and who practice education should seize the opportunities of this era.”

It’s true that society’s demands on human abilities have already changed. Our existing curriculum system, assessment systems, and the roles of various actors within the education ecosystem should also undergo some changes.

And these changes are already happening.

In February 2026, the Ministry of Education held a national key-work deployment meeting for basic education, clearly stating: “Promote the entry of artificial intelligence into primary and secondary school curriculum standards, daily teaching, and examination and evaluation.” And starting from the autumn term of 2025, Beijing and Shanghai had already incorporated AI general-knowledge courses into their primary and secondary school curriculum systems. AI is no longer just an elective interest class, but compulsory content alongside Chinese, math, and other core subjects.

In higher education, in March 2026, Communication University of China announced the cancellation of 16 undergraduate majors, including translation and photography. Most of the removed majors share the traits of being highly procedural, weakly creative, and relatively cheap for AI to replace. Meanwhile, Tsinghua University established its School of Artificial Intelligence in 2024 and provided every incoming student with an AI growth assistant, and Zhejiang University made AI courses a general-education requirement in its undergraduate curriculum in 2024.

These changes reflect two trends in how AI impacts education content: one is the reconstruction of traditional disciplines, and the other is the cultivation of AI literacy.

When AI lets liberal arts students write code and helps science students better appreciate the beauty of language and art, the boundary between engineering and the humanities blurs. Tsinghua University has recently even begun offering natural-language programming courses for liberal arts students. But what about more foundational disciplines such as Chinese, math, physics, and chemistry? If traditional subject divisions are no longer necessary, what shape will basic education take?

“Traditional subject separation is still important. Each discipline has its own unique way of thinking. Math teaches logic, physics teaches causality, and Chinese teaches understanding and expression. These ways of thinking form a cognitive framework built through long-term training—which is the key capability behind ‘asking good questions’ that we discussed earlier. Without mathematical thinking, even with Agents helping, you can’t do complex quantitative analysis.” Yang Linfeng told us.

“The change is that the ‘walls’ between disciplines will become thinner. In the past, we taught by dividing subjects mainly because the amount of knowledge was too large—students couldn’t hold it all without separation. Now AI can integrate information across disciplines at any time, so curriculum design can be more daring: for example, centering it on ‘themes’ or ‘real-world problems’ and connecting knowledge from multiple disciplines. This aligns with the underlying logic of the ‘big unit design’ advocated by the new curriculum standards, which we already followed when developing courses more than ten years ago.”

As for the AI-literacy courses currently running throughout the entire education system, Jiang Feng believes they are only a transitional product. “Just like the microcomputer courses of the 1990s, which had to be offered separately because the technology was still too new and far from widely adopted.”

Li Kejia shares the same view. He believes AI literacy training should be embedded across all courses, because at its core, this course isn’t “about knowledge of AI,” but “the abilities cultivated through collaboration with AI.”

“We are already teaching students how to use Agents,” Yang Linfeng added. “But the approach isn’t to teach students to use AI directly. Instead, we design AI as an intelligent learning companion and embed it into the learning process. Students just learn math and physics normally, and the Agent helps alongside by explaining concepts, guiding thinking, and planning paths. During regular learning, students will naturally build the capability to collaborate with AI.”

With changes in teaching content and methods, assessment approaches and evaluation systems also face a revolution. Traditional closed-book exams are losing their meaning because they measure a capability that AI will ultimately replace completely—memory.

In fact, the storm of exam reform has already started.

At the beginning of 2025, the final-year mathematics exam for fourth graders in Shenzhen’s Nanshan District sparked heated discussion. The paper contained many long word problems focused on real-life situations, and the reading load was so heavy that students struggled to finish; the education department even issued a last-minute notice extending the exam by 20 minutes. This sounds unrelated to AI, but it reflects a trend: future exams are more likely to assess the ability to solve real-world problems with disciplinary thinking than how many facts you remember.

Correspondingly, the evaluation system across the entire education system will shift from “outcome evaluation” to “process evaluation.” Students use Agents to write an essay, but do they have the ability to judge whether the essay is good? Can they revise it to be better? Can they spot where the logic doesn’t work? These process capabilities may be what future education evaluation systems focus on more.

Changes in the role of teachers

The revolution in teaching content and evaluation systems lands directly on the most immediate role: teachers.

At the 2026 Chongli Forum, New Oriental founder Yu Minhong made a blunt judgment: “AI + education will very likely eliminate large numbers of teaching jobs. Under the new standards, frankly speaking, a large portion of China’s primary and secondary school teachers today are not qualified.”

What are the new standards? Not who knows the test points best, or who predicts exam questions most accurately. When AI can grade assignments, practice spoken language with students, and replicate a master teacher’s lessons anywhere, what gets eliminated isn’t the teaching profession but the “teaching artisan” function embodied in teachers.

In May 2025, the Ministry of Education’s Basic Education Teaching Guidance Committee released the “Guidelines for the Use of Generative Artificial Intelligence by Primary and Secondary Students,” clearly defining AI’s role boundaries in the classroom: teachers may not let generative AI take over as the lead instructor, and are prohibited from using AI to directly answer students’ questions. At the same time, teachers are required to actively participate in professional training to improve their AI literacy. These three key points clearly mark three bottom lines for the teacher’s role:

The lead role in teaching must remain human;

It must not cultivate the habit of students directly getting answers from AI;

Teachers themselves must be proficient in using AI.

“The most realistic change in the teacher’s role is the shift from knowledge transmitter to question coach,” Li Kejia said. “Stop being a porter of knowledge. Return to the most precious role: forging students’ ability to ask questions. In an era when AI can answer everything, the teacher who can teach students how to ask follow-up questions is the rarest teacher.”

Onion Learning, which has 4 million teacher users, has a deep understanding of changes in the teacher role. “When we collaborate with schools, we find that teachers aren’t being replaced by technology—they’re being empowered by technology. We’ve been rolling out a new AI classroom model in schools: only let AI handle part of the basic explanation and practice feedback functions. Teachers put their energy into organizing discussions, diagnosing students’ difficulties, and providing emotional support—things that are more important for shaping abilities.”

From porter of knowledge to forger of capability: within this shift lies an even more critical responsibility, supervising AI. “It’s not about checking whether AI makes mistakes; it’s about checking whether the entire learning process is truly effective for each student,” Yang Linfeng emphasized. “Throughout the education process, teachers also need to select among the assistance plans AI provides and make judgments.”

New opportunities in the education industry

The transformation of the education system is urgent, and it opens a new round of enormous opportunity for companies in the education sector.

Jiang Feng believes the AI wave creates three new opportunities for companies in the education sector:

The first is high-quality teaching content design firms. Industry consensus is that whether it’s large models or Agents, if they only use public datasets from the internet, they can’t meet the needs of current educational scenarios. Just like the embodied intelligence industry needs high-quality data, the education sector needs professional teaching data and content design even more. This not only makes it easier for students to understand learning content, but also integrates the training of core capabilities—asking questions, following up, value judgment—into the process.

“What Onion Learning needs to do is use technology to reconstruct content into Agents.” Yang Linfeng is very confident about data and content. “Our confidence comes from more than 10,000 carefully designed pieces of course content, a deep understanding of students’ learning states accumulated over 500 billion interactions, and our insight into ‘how to teach children so they can truly learn.’ Without these, an Agent is just an empty shell.”

The second opportunity is classroom design. The key is helping schools and teachers find the “golden ratio” between human and AI work, and between capability building and assessment. Here, education technology companies are more capable than schools. Classroom design must rethink, for the offline classroom, the form and density of human-AI cooperation, re-position which roles participate and in what forms, and differentiate itself from students’ independent online learning. In this era, its importance is no less than that of textbook writing.

A deeper challenge is coupling with the assessment system. When exams also begin to emphasize critical thinking and human-AI collaboration literacy, how will school education keep up? This requires intervention during the design phase of offline teaching content—weaving “capability development” and “exam-prep necessities” into the same product logic.

The third opportunity lies in building value systems and humanistic literacy. When technology lowers the barriers to acquiring knowledge, education returns to its original form: cultivating a complete person. Curiosity, resilience in the face of setbacks, a spirit of cooperation, moral sense, and an appreciation for beauty were submerged by the exam-oriented system of the old era, but will become the biggest differentiators among people in the AI era. Training systems for these qualities likewise deserve higher priority from parents in the era ahead.

Back to that anxious mother

The mother who asked the question in the parent group at midnight may not need to be so anxious. History has proven that writing does not make people stupid, printing does not make people lazy, and the internet does not strip people of their ability to think. This time, AI probably won’t either.

What truly needs attention is not whether the child uses AI, but how we define “education.” The ability to ask questions supported by systematic learning and framework capability; the ability to choose supported by doubt and value judgment; and the human soul supported by humanistic literacy. Helping children build a stable foundation of these three layers of abilities is the real issue parents should care about.

The transformation of the education system won’t happen overnight. It requires policymakers to set aside a century of path dependence, teachers to complete the shift from “teaching artisan” to guide, and parents to strike a difficult balance between exam anxiety and long-term capability.

But for that mother and her child, the answer might be simple: first, ask the child to describe how they collaborated with AI, and why they think the article is good. Everything else can be left to time.

(Author | Tao Tianyu, Editor | Yang Lin)
