Ma Zhaoyuan on AIGC: Can Domestic Companies Catch Up with OpenAI This Year? It Will Take at Least 4-5 Years
Text: Li Haidan, Tencent Technology
**【Editor's Note】** In recent months, AIGC has been favored by capital, ChatGPT has become wildly popular, and major domestic technology giants and start-ups have entered the market one after another. Artificial intelligence is quietly reshaping every industry. This is no longer a simple technological evolution; it is becoming a revolution.
Although the explosion of AI applications may seem sudden, behind the scenes scientists have accumulated experience through countless failures. As our guest **Ma Zhaoyuan (Professor at the Southern University of Science and Technology, Fellow of the Institute of Physics (UK), UK Chartered Engineer, and former chief researcher of Tsinghua University's Future Laboratory)** writes in his book "The Impossibility of Artificial Intelligence": "Every inch of progress requires hard work every day and the superposition of various uncertainties."
In fact, in the long history of science and technology, this technological revolution is only the first step leading mankind into the "era of generative AI". Many existing problems and contradictions still demand attention. For example: under the wave of technology led by OpenAI, can China catch up quickly? What changes will this revolution bring to society and to industrial structure? Will machines replace humans in the future? In an era of such rapid technological iteration, how should China cultivate talent?
In this issue of "AI Future Guide North", Tencent Technology interviewed Ma Zhaoyuan, who shared his thoughts and views on these issues. We have compiled the conversation into this article to explore, learn, and think together with readers about what artificial intelligence means for human beings.
Don't worry about people becoming "machine slaves"; behind the hot topic is capital hype
With the recent boom in applications such as large language models, many people worry that the development of artificial intelligence will bring risks and safety issues to human beings and come into conflict with human survival and society.
Regarding these worries, a term comes to mind: "machine slave".
We worry that one day in the future we will become "machine slaves", as in the plot of the movie "The Terminator": one day humans are ruled by machines and forced to live underground. Throughout this struggle, humanity keeps its rebellious spirit and fights to regain control of the future and of Earth. In the end, the AI sends a robot back through time to 1984, trying to destroy the leader of the Resistance before he is born, while wars between robots and humans rage in the future... Many people hold some version of this imagined scenario.
I think such views and concerns are completely unnecessary. First, judging from the current level of technology, it is unlikely that robots or intelligent agents will truly "rule the earth", let alone become the masters of human beings. Even if that were to happen at some point in the future, it would take a very long time to realize, and there is no need to publicize the matter now and cause widespread worry.
In addition, although many industry leaders are currently discussing related topics, I think this may be, to some extent, driven by economic interests and the capital market behind them. **Capital must first hype a theme before it can profit from it.** Take the 17th-century Dutch "tulip bubble": a fictitious concept of little real value was hyped up, and then someone profited by fleecing retail investors. In reality, the subject may not be as frightening as we think.
Over the past ten years, China's capital market has introduced and hyped many themes: graphene materials, then virtual reality, bitcoin, the metaverse, and so on; this year it is artificial intelligence. Every year a new buzzword gets hyped. In this process the capital giants may profit and grow ever larger, but ordinary novices who merely chase these buzzwords, without understanding the underlying logic and the real situation, may lose almost all the money in their pockets.
Recently, ChatGPT has exploded in popularity. It is an important technological advance: **in essence, it has changed the way we humans interact with computers.** If you want to call it a revolution, then from the perspective of the industry landscape, GPT has changed the fortunes of OpenAI, Microsoft, and Google; Google is now very nervous about GPT's rise. The emergence of GPT has greatly improved the accuracy and efficiency of search. In the past, a Google search would return every possibly relevant web page, sometimes hundreds of pages or even tens of thousands of results. The current models can understand the user's question more precisely and give the most likely answer, which greatly improves information retrieval. Next, it may change the way we work, especially in document processing and data collection, where it can greatly improve efficiency.
But as for whether it will develop into general artificial intelligence, that is still very far away from today's vantage point. As I argue in my book ("The Impossibility of Artificial Intelligence"), there is a core, critical gap between general artificial intelligence and humans, and existing AI technology cannot bridge it. The reason will be discussed below (Part 06).
The domestic market lacks an ecosystem-building mentality; catching up with OpenAI will take at least 4-5 years
We have seen many large language model products launched. The popular ChatGPT, for example, is doing very well, and many large companies in China are following suit with similar products.
For example, one company launches a product, and a few days later another company launches a similar one, cheaper and adapted to the Chinese environment, all hyped with similar claims. I think many large domestic companies lack the mentality of building an ecosystem and are eager to make money quickly: the moment a new buzzword appears, they rush to follow it.
This kind of bandwagon behavior is very unfavorable to the development of a national or regional economy. If we keep chasing trends, we may never catch up. Microsoft, for example, spent the past ten years concentrating on research; it made few breakthroughs in other fields and was largely suppressed by other companies, which meant enormous uncertainty and pressure. Microsoft "held back" for ten years, and only with its financial resources, R&D capability, and business partnerships did we finally see the launch of ChatGPT and related products. It took a long time to produce this "breakout point"; expecting a company to achieve radical change overnight is unrealistic.
Could these companies be special "talented players" who need only a few months, or a year or two, to complete work that takes others ten years? Are they unaware of the issues we are considering? Or do they need a hyped topic to push up the stock market and their share price? All of this requires ordinary people like us to think carefully before deciding whether to follow the trend and make corresponding investments.
**As far as the country's economic development is concerned, if you simply follow the trend, you may never catch up with those who started first. Instead, we should think about how to build a mutually supporting ecosystem on the basis of our current capabilities. This question is worth pondering for the domestic technology giants.**
To give an example: over the past 40 years, China has made great achievements in infrastructure. China's high-speed rail has developed so well that it is now a globally significant system and has become a calling card for China. Likewise, in the communications and computer industries, China's 5G is already the strongest in the world. In these fields we have real advantages, and the foundations have been laid well. We can now build new possibilities and ecological directions on top of these foundations to exert even greater advantages.
We can imagine that, building on these advantages over the next 10 or 20 years, if some countries in Europe and America monopolize certain technical directions, we can use our own leading fields to reach a state of mutual leverage, each holding "choke points" over the other, and thereby gain negotiating position. Returning to technology: whether domestic companies are trying to catch up with GPT or with Bing, they can draw on the experience of these "predecessors". **We can accelerate research and development, but catching up to the same level may still take at least 4-5 years.** We also need to ask: even if we catch up a few years from now, will the wind have already passed? And will Microsoft or OpenAI, which are themselves iterating and growing, have become giants by then?
**In general, trend-chasing investment is a waste of cost, time, and talent. For participants, beyond increased panic and anxiety, there may be no benefit at all; it only allows capital to harvest part of investors' interests in the process.**
AI will not cause mass unemployment; the occupations of every era are created by humans
Beyond the AI-threat narrative, people are more concerned about AI's impact on employment.
The highly anthropomorphic nature of ChatGPT has triggered a wave of anxiety across industries, and many people have begun to worry that their field will be disrupted, or that they will even lose their jobs. From the current point of view, some jobs may be affected in the short term, but this will not disturb the long-term stability of society.
From the perspective of career direction, will the artificial intelligence revolution lead to a decline in the population and labor force? The answer is no.
For example, during the Industrial Revolution, roughly between the 18th and 19th centuries, one of the biggest changes was the decline of the agricultural population. Before and during the early Industrial Revolution, more than 95% of the world's population was engaged in agricultural labor; only a small fraction were rulers or priests. The vast majority of people were farmers.
The situation is different now. Take developed countries: in the United States, less than 2% of the population works in agriculture. The 90-plus percent in between did not disappear; they changed what they produce, and the population actually grew. Judging from the impact on human society, such changes did not cause long-term unemployment or destabilize society. Most people simply changed jobs; what changed was the human mode of production. Although some positions shrank significantly or even disappeared, the same change triggered new demand for other positions.
Another example is the English word "Computer". A machine comes to mind when we hear the word today, but 70 years ago "Computer" meant something different: it referred to people who did calculations, for instance on projects like the Manhattan Project.
There were no computers in that era, no desktops like the ones we use now, yet huge projects required enormous amounts of calculation. So the organizations in charge hired young, careful women to sit in special rooms doing large volumes of computation with slide rules and scratch paper. These women were called "computers".
The term "Computer" was originally applied to people who worked in computing, referring to workers who did large amounts of calculation in offices. Later, with the advent of the electronic computer, the word came to mean the machine, not the women who did the work. The meaning of "computer" changed completely; today it refers purely to machines.
**So, as machines become more capable in certain areas, they may take over some human jobs, and some positions will disappear. But the people do not disappear; they move on to more complex jobs or to meeting other needs.**
**One of the greatest characteristics of human beings is to constantly innovate and create new needs.** These new needs lead people to create new job opportunities. We do not need to panic or worry excessively about unemployment. Throughout the history of human society we have been adapting to this kind of change, and we have the ability to keep creating and adapting to new work environments and employment opportunities.
From the perspective of professional skill requirements, as large language model applications develop, programming tasks will become easier and simpler. With a tool such as ChatGPT, we only need to state what we want and issue a clear task, and GPT helps us complete the rest. For example, we can ask GPT directly, and it will use its programming and retrieval capabilities to generate code. In that case we might not even need to write in a language like Python ourselves; one simply describes the requirement, and the described approach gets implemented.
**Judging from this trend, programming will become more and more automated.** It is like the assembly language I learned when I first studied programming: most young people today can no longer write it, and for the same reason.
Assembly language is a low-level language that sits between human language and machine language, close to programming the machine directly. After assembly came languages such as C, C++, and Java, and then, gradually, languages such as Python. When I talk with students, I find that Python feels very loose to those of us who learned C, yet it has become the favorite tool of students who are no longer used to C. Under the influence of differing theoretical backgrounds, engineers still differ in how comprehensible they find AI systems, and we may still need professionals continuously improving these systems behind the scenes; how to unify standards of intelligibility so as to get the desired results remains an open question.
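The rising level of abstraction described above can be illustrated even within Python itself (this comparison is an editorial sketch, not from the interview): the same task, summing a list, written first in an explicit C-style index loop, then as the one-line idiom a Python programmer would actually use.

```python
def sum_c_style(values):
    """Sum a list the way one would in C: manual index and accumulator."""
    total = 0
    i = 0
    while i < len(values):
        total = total + values[i]
        i = i + 1
    return total

def sum_pythonic(values):
    """The same computation, one idiomatic line: the loop is hidden."""
    return sum(values)

data = [3, 1, 4, 1, 5, 9]
assert sum_c_style(data) == sum_pythonic(data) == 23
```

Each new generation of languages hides more of this mechanical bookkeeping, just as LLM-assisted coding may one day hide the code itself behind a natural-language description.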
In general, **in any era, new needs are created by us human beings.** We cannot adopt a static mindset focused on substitution and conflict. If there were only a fixed set of jobs and needs on Earth, then once machines took over those jobs, human existence might truly lose its meaning. But in fact, the great thing about being human is our ability to continually create new needs and to have those needs met by humans.
Competing with AI is meaningless; pay more attention to policy constraints and risk management
Now, both China and the United States have begun to introduce some related regulatory mechanisms. Technological development in any period requires certain policy constraints and risk management.
Take the history of the automobile as an example. Before 1900 there were very few cars; only the very rich could afford them, and they had little impact on society. Moreover, cars were slow, traveling perhaps a dozen kilometers per hour, not much different from walking, so there was no need to make many rules for them; they were simply allowed to develop.
However, once the Ford Motor Company introduced assembly-line production, the cost of cars fell dramatically, ordinary people could drive, and the number of cars increased significantly. Speeds also rose from a dozen kilometers per hour to hundreds. At that point cars became dangerous and raised safety issues, so humans had to formulate rules for them: designing dedicated roads so cars no longer mixed with pedestrians, building highways, installing traffic lights and signs on shared roads. All these rules came into being out of necessity.
In the same way, we design machines to collect and organize data at high speed and to perform rapid deduction and logical processing, just as we design cars to go fast. Once the car exists, we do not need to race it to see who is faster.
Therefore, when a computer has such powerful data collation and processing capabilities, it is meaningless to compete with it in its areas of expertise. What we need is to set rules for it.
For example, "Midjourney", a technology the industry has recently been watching, can be used for image generation and voice imitation, and can even fabricate news. When such video content and news spread on the Internet, how do we regulate them, and how do we verify their authenticity? This becomes a matter of developing rules over time, and the formulation of these rules is what makes the coexistence of humans and machines a real, concrete problem.
These issues require us to start thinking today and to reach consensus. Now that cars exist, the earth is in a state of symbiosis between humans and cars, so we formulate traffic rules to ensure that humans and cars can coexist in cities and other environments. In this process, not only cars must obey the rules; humans must obey them too.
The rise of AI will consume more energy and resources, but its efficiency gains for humanity deserve recognition
In this interview, a question about energy structure came up: from the perspective of industrial and economic structure, the explosive growth of AIGC requires more computing power, and therefore more electricity and water support. Is this the case? Will it change the energy layout of individual countries or the global energy structure?
This is certain to happen. When new industrial structures and demands emerge, it is an inevitable result; the question is how to arrange and adjust. If AI consumes computing power, enough energy must be supplied for it, which involves considerations of a green planet and the structure of energy consumption. I think this is not specific to AI; it is the natural consequence of any new demand.
According to relevant data, the combined annual power consumption of China's cloud centers may be equivalent to the generation of two Three Gorges power stations (and cloud centers do far more than support AI; AI services are in fact a relatively small share). As the volume of computation grows, the demand for electricity grows further. Beyond adding new energy supply, we also need to consider how to improve energy efficiency, which is a fairly complicated issue: energy goes both into computation and into cooling. Given the rapid build-out of the past few years, roughly half of the energy we supply to domestic data centers is currently spent on heat dissipation. This is something we need to address.
How should we solve and avoid unreasonable resource occupation and allocation? Another example: after Google acquired DeepMind, the DeepMind team was asked to use reinforcement learning and other AI algorithms to tune Google's cloud centers for energy savings. This reportedly helped Google reduce energy consumption by nearly 50%, so that almost all of the electricity in Google's cloud centers is now used for computing, with only a very small share (less than about 5%) used for cooling. This kind of optimization eliminated energy waste in Google's cloud centers on a large scale.
Therefore, if we can reach an efficiency similar to that of Google's cloud centers, and take into account China's "dual carbon" goals and the green energy the world advocates, the question becomes how to keep using energy efficiently in the future.
Note that this question concerns only energy consumption. Overall, AI can indeed greatly improve our efficiency; once widely used, the efficiency gains are likely to far outweigh its energy cost.
How does AI understand human language? Through three modes of logical reasoning
Current deep learning models, especially the recent large language models, are still a "black box" technology. Although large language models perform well on many natural language processing tasks, we still need to find an interpretable account of them.
In scientific work, we are used to connecting phenomena to other things; if we can describe them with a concise, elegant formula, we can claim to understand them. But judging by the interpretability of current large language models, including neural networks, their parameters look very random, and very slight parameter changes can change the results greatly. The parameters clearly play a role in the architecture, but their exact mechanisms are not fully understood. We cannot describe them with simple algebraic models, and in that sense they are not well understood.
Ordinary people (non-specialists) are not used to describing the relationship between two things with vast arrays of numbers, or to tracing how a change in each number leads to a result. When a relationship cannot be stated clearly, we feel we have not reached understanding. People therefore often conflate the two ideas and conclude that large language models or neural networks are "not understood". In fact they are not entirely ununderstood; we simply have not found the kind of satisfying explanation we are used to.
At present, GPT is trained primarily on big data; its main method is to learn, probabilistically, the answer we are most likely to want. To judge whether this form of reasoning is feasible and reliable, we can look at it from several angles:
First, it gives the most probable answer. At the algorithmic level, the methods involve neural networks and Bayesian statistics; this is the logic GPT uses behind the scenes, and it is sound.
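The "answer by maximum probability" idea can be sketched in a few lines (an editorial illustration, not from the interview; the candidate words and scores are made up): a language model assigns raw scores (logits) to candidate next tokens, converts them into a probability distribution with softmax, and emits the most likely one.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)                         # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next tokens
# after the prompt "The capital of France is".
candidates = ["Paris", "Lyon", "banana"]
logits = [5.0, 2.0, -3.0]

probs = softmax(logits)                     # e.g. ~[0.95, 0.05, 0.0003]
best = candidates[probs.index(max(probs))]
print(best)                                 # prints "Paris"
```

Real models repeat this step token by token over a vocabulary of tens of thousands of entries, often sampling from the distribution rather than always taking the maximum, but the statistical core is the same.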
Second, when it comes to logical reasoning itself, we can divide it into three different modes, rather than a single chain of logic.
When we humans perceive the world, there are three different ways:
**The first is deductive reasoning, which yields strictly correct conclusions.** A machine can perform deductive reasoning much faster than we can because it is built on the four fundamental principles of classical logic: the law of identity, the law of non-contradiction, the law of the excluded middle, and the law of causality.
Based on these four principles, deterministic conclusions can be derived. The problem with a deterministic conclusion, however, is that in logic it is a tautology: a known fact restated in another way. From the deductive point of view, the answer is already implied in your assumptions; it is merely expressed differently.
One thing we need to understand is that the Turing machine was designed for exactly this; it is a classical deductive logic machine. In 1936, the British mathematician Turing published the landmark paper "On Computable Numbers, with an Application to the Entscheidungsproblem", marking the birth of the Turing machine. The operation of a Turing machine closely resembles the process of doing a calculation on paper. The Turing machine is by far the most widely used classical computing model, bar none.
**To this day, artificial intelligence is still built on Turing machines. Whatever a Turing machine cannot do, today's computers cannot do either, no matter how powerful. This is one of the cores of how we should think about the division of labor between humans and AI.**
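To make the notion concrete, here is a minimal Turing machine simulator (an editorial sketch, not from the book or the interview): a tape, a read/write head, and a transition table mapping (state, symbol) to (new state, new symbol, head move). The example machine below simply inverts a binary string and halts, which mirrors the "written calculation" analogy above.

```python
def run_turing_machine(tape, rules, state="start", blank="_"):
    """Execute a one-tape Turing machine.

    rules maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left), +1 (right), or 0. Halts on state 'halt'.
    """
    tape = list(tape)
    head = 0
    while state != "halt":
        # Read the symbol under the head (blank beyond the tape's end).
        symbol = tape[head] if 0 <= head < len(tape) else blank
        state, new_symbol, move = rules[(state, symbol)]
        if head == len(tape):
            tape.append(blank)          # extend the tape on demand
        tape[head] = new_symbol
        head += move
    return "".join(tape).strip(blank)

# A machine that inverts a binary string: 0 -> 1, 1 -> 0.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),   # blank means end of input: halt
}
print(run_turing_machine("0110", flip))  # prints "1001"
```

Everything a modern computer does is, in principle, reducible to tables of mechanical rules like `flip`; that is exactly why the limits of the Turing machine are the limits of today's AI.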
**The second mode is induction.** Induction means observing many events, finding their common characteristics, and summarizing them into new knowledge. But induction cannot be made strictly logical, because it is impossible to exhaust all possibilities. Hence the so-called "black swan event": we observe that swans in Europe and America are all white and conclude that swans are white, but when a black swan turns up in Australia, the induction fails; it could never cover every possibility. Machines are limited here and cannot go beyond the bounds of induction, whereas humans can. We must also accept that inductive conclusions may be overturned, and that is exactly what modern science pursues.
The third mode is analogy, a loose way of reasoning by associating one thing with another. For example, someone puzzling over the structure of DNA who dreams of two intertwined snakes may thereby think of the double helix; the double helix structure of DNA is said to have been hit upon in just this way. For computers based on deductive logic, this is impossible. Analogy is a less rigorous way of reasoning, but we humans can use it.
From these three modes we can conclude that machines far exceed humans in carrying out deductive logic, because they run on Turing machines and are complete computing systems. But machines cannot generate genuinely new knowledge; new knowledge must be obtained by humans through loose induction or analogy, then demonstrated step by step through deduction, and finally consolidated into relatively stable knowledge. Machines cannot surpass humans in acquiring new knowledge. When we say "the machine cannot do it", we mean it cannot handle anything outside strict deductive logic, and those areas are exactly what humans can handle.
This bears directly on the division of labor between humans and machines. Whether we call them artificial intelligence or simply machines, they are all built on Turing machines, and the problems above are unavoidable. If today's artificial intelligence cannot achieve certain tasks, it may also be due to hardware limits such as the slowing of Moore's Law, or other related constraints.
Recently, Sam Altman, CEO of OpenAI, stated that the amount of global AI computation doubles roughly every 18 months, and some people take this to mean that AI computing performance will keep improving exponentially. In fact, **Moore's Law and algorithms are two different propositions. Moore's Law mainly describes the development of hardware; algorithms do not fully follow it.** Because of the technical difficulty of precision fabrication, Moore's Law has in a sense already slowed, and it faces growing challenges in manufacturing technology. As for algorithms, it is hard to claim they advance according to Moore's Law; the two are distinct.
Pushing Moore's Law further, when the computing unit reaches the atomic scale we enter another field: quantum computing. From what we have learned in recent years, a quantum computer is not a strict Turing machine. Moreover, building quantum computers is technologically extremely difficult, and it may take a very long time before they become as universal, in the algorithmic sense, as Turing machines. My own view is that we need not worry much about this for the next 300 years. Whether there will be a key breakthrough in quantum computing after that is hard to say, because from a reductionist perspective, the human mind must ultimately rest on some physical substrate.
At present, more and more indications suggest that **our way of thinking is not equivalent to that of a Turing computer.** By our current understanding, we have only two candidates, the classical Turing machine and the recently emerged quantum computer, and it is not clear that a third option will ever appear.
If we have basically established that the human brain is not a classical Turing machine, then it may be something like a quantum computer. However, whether quantum computers can give rise to human-like patterns of thought is unclear. What we are increasingly sure of is that a quantum computer is not a Turing machine; its underlying logic is different.
Talent Education in the GPT Era: Cultivate Strong Learning Ability
We create machines to help us accomplish different tasks. From the perspective of specific careers, it is therefore difficult to say which jobs will never be replaced, because any event or algorithm that can be fully described can be executed by a Turing machine. Once we can describe a job as a specific task, a computer can do it; the only question is how efficiently.
In fact, the question of what Turing machines cannot accomplish was pointed out by Turing and the mathematician Gödel back in the 1930s, though it did not attract enough attention at the time. That generation proved, in effect, that perceptual thinking and intuition are the basic tools with which we humans understand the world, while rational thinking is a tool for organizing perceptual thinking. In short, **the ability to truly perceive the world remains uniquely human and is achieved through our own perception. This is one of the cores of how we understand the distinction between humans and machines (or AI).**
Given this difference in core competencies, an important ability for present and future humans to cultivate is strong learning ability and adaptability. Only through learning ability can people devise new solutions for new needs and turn them into their own work. It is hard to discuss every aspect in detail, because learning ability cuts across many domains. But we must focus on it, and use education to change the way we currently teach our students.
For example, this semester I stopped assigning homework. I realized there was nothing to stop students from doing assignments with tools like GPT, and that GPT's answers might be better than I expected, so such assignments had lost their meaning. Instead, I pay more attention to dialogue and interaction with students in class, and to their understanding of reasoning logic and process, rather than whether they can complete homework.
In addition, over the course of the semester I ask them to complete a relatively systematic project-based assignment. Education today advocates project-based learning, that is, learning through participation in projects. In the course of a project, students come to understand what they are doing, rather than being educated through the old methods of question-and-answer, test papers, and homework. People cultivated through project-based learning hold an advantage over machines, rather than merely being able to answer questions.
Many issues in this process are worth considering. It is precisely by thinking through and understanding the needs in this area that a large number of new job opportunities and new development directions will be created. If I had to name one, the difference between humans and machines may be the bigger trend truly worth watching in the future. To sum up, what we focus on is humans' own abilities and the interaction between humans and machines, which is a very broad field.
The development trend of human-computer dialogue in the future: the interaction between humans and machines
As for conceptions and visions of future AI, it is difficult to predict specific trends accurately, and such predictions may guide public opinion and affect the direction of capital investment. The views below are offered only for exchange and discussion.
**I think an important trend is the interaction between humans and machines. With both machines and humans developing rapidly, we need an interface or tool that connects the two and enables better communication. Human-computer interaction will be a very important technical field.**
When looking for future trends, we should pay attention to both humans and machines, not just one side. We need to think deeply about human capabilities and positioning; this is a question that requires long-term thinking.
Although we have discussed mostly educational ethics and possible future directions for humanity, from a technical perspective human-computer interaction may be a field with great potential. We need to think about how people and machines can communicate faster and more efficiently, without requiring people to become professional experts in the models.
Whether human-computer interaction can draw more people in quickly and let them manage machines effectively may itself affect, and even accelerate, the development of machines. Because the rapid development of machines is inevitable, humans also need to clarify their own strategy and positioning. Since humans and machines must coexist on Earth, we should have a particularly harmonious, convenient, and efficient way of interacting, and this kind of interaction may require many new technologies to realize.
All in all, we do not want to become "machine slaves" in the future, so we must think about humanity's positioning. In education, the popularity of GPT has posed important challenges and questions for me: "Are the students cultivated by the traditional education model more like machines or like people?" "How should we learn so that we will not be replaced by AI?" These questions drive our serious discussion today. As teachers, we do not want to find, 10 or 20 years from now, that what we taught our students has left them replaced by computers, out of work, or forced to change careers.
**Human thinking is free, creative, and communicative. Fundamentally, what we need is a way of cultivating innovative technical talent with lifelong learning habits.**