Boao Forum Discusses the First Year of AI Intelligent Agents: Behind the Explosion, Risks and Governance Become New Focus
The President of the China Academy of Information and Communications Technology, Yu Xiaohui, said that enterprises and ordinary consumers are willing to try new technologies, but they also need to build greater awareness of, and respect for, new technologies.
“This year, AI has seen three major trends. First, it is moving from generative AI to intelligent agents; consumer-facing intelligent agents are seeing explosive growth, making this the first year of the intelligent agent. Second, intelligence is shifting from informational intelligence to physical and biological intelligence. Robots and autonomous vehicles fall under physical AI. Third, AI is moving from ‘AI’ to ‘AI+’, meaning it is entering every industry.” On March 25, at the Boao Forum for Asia sub-forum on “‘AI+’: Digital Intelligence Empowering Industry Upgrading,” Zhang Yaqin, an academician of the Chinese Academy of Engineering, Chair Professor at Tsinghua University, and Director of the Tsinghua Institute for Intelligent Industry, offered this latest assessment.
It was precisely intelligent agents, represented by OpenClaw “Lobster,” that sparked the first wave of AI applications in 2026. They not only opened the door to the first year of the intelligent agent, but also made it more feasible for AI to penetrate deeply into thousands of industries.
Some industry observers have noted that domestic users are especially enthusiastic about new AI applications. Yu Xiaohui, President of the China Academy of Information and Communications Technology, remarked at the forum that China has a very high level of acceptance of digital technologies, including AI; enterprises and ordinary consumers are curious about new technologies and willing to try them, which brings strong vitality to China’s digital economy and AI development.
The deepening use of AI also brings new material to discussions of AI’s impacts and risks. In the view of multiple experts from academia and industry, people can already see the changes AI is bringing to paradigms of technological development; at the same time, new AI risks are emerging, and more new issues need to be discussed.
Jiang Xiaojun, a professor at the Chinese Academy of Social Sciences University and former Deputy Secretary-General of the State Council, observed that traditional innovation involves scientists discovering scientific laws, technology departments developing advanced technologies, and industry carrying out their commercialization. But as the inputs to discovering new laws and technologies shift toward a convergence of algorithms, data, platforms, and massive investment, industry itself becomes the main force in front-end discovery and frontier R&D. “Before 2014, about 60% of the most frontier large models were developed by universities. After 2014, 90% were developed by platform-based big enterprises. In this era, the role of industry and enterprises in innovation is beyond comparison with any other time in the past.”
But an AI industry driven primarily by industry and enterprises, while generating innovation and attracting users, also gives rise to new AI risks.
Taking “Lobster” as an example, Yu Xiaohui said that, to some extent, people in China may have underestimated the security challenges AI could bring. “When people use ‘Lobster,’ they rarely consider whether granting open permissions could lead to uncontrollable risks. In February, we (the China Academy of Information and Communications Technology) detected these risks, but we did not release the findings to the public, nor did we expect the social response to be so intense. Only when the tool became widely adopted nationwide did we release the relevant information.” Yu Xiaohui added that future work will consider whether, when a technology has value but also potential risks, warnings should be disclosed to society earlier.
“We also need to establish greater awareness of, and reverence for, new technologies, and we should have basic safety considerations. For an institution or enterprise, safety governance capabilities will become especially important in the future. AI suppliers bear a huge responsibility for safety hardening.” Yu Xiaohui said the China Academy of Information and Communications Technology is pushing leading domestic AI companies to make safety commitments and disclosures.
Other participants pointed to further dimensions of AI risk. Sam Daws, Senior Adviser to the Oxford Martin School’s AI Governance Initiative and Director of Multilateral AI, identified three categories of AI risk: criminals using AI for cyberattacks, AI running out of control and causing accidents, and systemic or social risks such as workers being displaced. Discussing accident risks, he posed a question he said was worth thinking about: “What consequences would result if intelligent agents such as ‘Lobster’ start interacting with each other?”
Zhang Yaqin argued that, beyond labeling AI-generated content, intelligent agents should have traceable entities behind them, with those entities bearing responsibility; it should also be stipulated that intelligent agents cannot generate or replicate themselves. He mentioned another major risk: when 65% of the content on the internet is AI-generated, and that contaminated content is used to train models, the models themselves become contaminated. Solving this problem is an extremely urgent task. Addressing these risks requires joint efforts from scientists, engineers, entrepreneurs, and policymakers.
Jiang Xiaojun said that international organizations have previously proposed various AI governance requirements, but without a social-science yardstick for measuring whether they are met. “I propose three standards of ‘good.’ The first is reasonableness: we can judge whether AI is good by whether it produces economic growth, increases social welfare, and distributes gains fairly. Economic growth and social welfare have increased, but at this stage the problem of social fairness has not been resolved well, and wealth may become concentrated among a small number of innovators.”
At the Boao Sina Finance Night on the same day, Wu Xiaoqiu, former Vice President of Renmin University of China, Director of its National Financial Research Institute, and a first-level professor, also raised the risk of wealth concentration that AI may bring. As AI-centered high-tech enterprises develop, he said, the concentration of social wealth is accelerating; domestically, it has become commonplace for people worth billions to emerge within a few months, something unimaginable before. “On the one hand, we cannot take restraining measures simply because wealth shows a tendency to concentrate; policy should still incentivize social leaders and innovators to press forward. But at the same time, it is not acceptable for the wealth gap to grow too large. In such an era, the functions of secondary distribution and transfer payments should be strengthened, and government revenue may need to be used more for these purposes.”
Jiang Xiaojun added that the standards for AI’s “goodness” also include appropriateness and consensus. On appropriateness, AI brings consumers many uses. On consensus: as scientific models move from discovering natural laws to creating new things and new social structures, people need to judge whether this is what they actually need; everyone should be willing to express themselves, forming public standards of judgment.