What can you do when machines are better than you?
Source: CITIC Publishing Group
A project called “OpenClaw,” an open-source AI agent, is causing a storm in the global tech community.
By early March, it had over 268,000 stars on GitHub, surpassing Linux and React to become the most-starred open-source project in the platform's history. Tencent Cloud, Alibaba Cloud, JD Cloud, and others have launched deployment services for it. The concept of the OPC (One Person Company) has also taken off.
Two forces converge here, and a clear technological trend has emerged: AI is evolving from a “tool” into a “collaborator,” and even an “autonomous actor.” At this moment, humanity must answer a fundamental question:
When machines can do things better than you, what can you still do? In an era of rapid AI advancement, how do we preserve human agency?
01 OpenClaw Moment: The Battle for the “Body” of AI
To understand this revolution, you first need to understand what the current hot topic, the "Lobster," actually is.
The "Claw" in OpenClaw is rendered in Chinese as "爪" (claw), and its icon is a red lobster. In this wave, "raising lobsters" has become a buzzword in tech circles, referring to deploying one's own AI agent.
What can it do? The core of OpenClaw is converting natural-language commands into actual computer operations, so that one sentence is enough to have the AI do the work for you. Unlike traditional chat AI that only offers suggestions, it can autonomously perform file operations, browser automation, data scraping, and more, making the leap from dialogue to execution.
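To make that "dialogue to execution" leap concrete, here is a minimal sketch of what such a command loop could look like. It is an assumption for illustration only, not OpenClaw's actual code or API: the function `propose_action`, the toy skill table, and the hard-coded model response are all hypothetical.

```python
import os

# A toy "skill" table: each entry maps a name to a function the host will execute.
TOOLS = {
    "list_files": lambda path=".": "\n".join(os.listdir(path)),
    "read_file": lambda path: open(path, encoding="utf-8").read(),
}

def propose_action(user_request: str) -> dict:
    """Stand-in for the language model: turn a natural-language request into a
    structured tool call. A real agent would query an LLM here."""
    return {"tool": "list_files", "args": {"path": "."}}

def run_agent(user_request: str) -> str:
    action = propose_action(user_request)     # 1. the model decides what to do
    tool = TOOLS.get(action["tool"])          # 2. the host looks up the skill
    if tool is None:
        return f"Unknown tool: {action['tool']}"
    return tool(**action.get("args", {}))     # 3. the host executes it on the real machine

if __name__ == "__main__":
    print(run_agent("Show me what is in my current folder"))
```

The essential point is step 3: unlike a chatbot, the agent's output is not advice but an operation carried out on the user's own machine, which is exactly why the permission questions discussed below matter.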
This leap in productivity has quickly caught the attention of sharp-eyed local governments. On March 7, Shenzhen's Longgang District issued the "Lobster Ten Rules," including up to 4 million yuan in computing-power subsidies and 100,000 yuan talent subsidies for PhDs. On March 9, Wuxi High-tech Zone released the "Raising Lobster Twelve Rules," with support of up to 5 million yuan, emphasizing safety and compliance and requiring deployments to pass domestic localization certification.
Meanwhile, the technical ecosystem around OpenClaw has entered a heated phase. According to media reports, the Step 3.5 Flash model from Zhaoyue Xingchen has risen to the top of global usage rankings, and domestic models such as MiniMax and Kimi have also topped the charts at various points. This invisible "model war" is raging.
However, amid the frenzy, concerns are emerging.
First, security risks. In February 2026, security researchers discovered “ClawHavoc,” a large-scale supply chain poisoning attack, with at least 1,184 malicious skill packages uploaded to the official skill marketplace. Once installed, these malicious programs can exploit OpenClaw’s “Full System Access” permissions to fully control the user’s computer and steal sensitive information.
Second, technical barriers. Zhou Hongyi, founder of Qihoo 360, said in an interview on March 9: "OpenClaw has three issues: security, configuration difficulty, and skill dependency. You have to chat with it like training an intern; the more you teach it, the deeper its understanding. You can't just say one sentence and expect it to complete a complex task."
A deeper contradiction lies in the conflict between “control” and “autonomy.” As AI becomes smarter, the fundamental question is: do we want “absolute obedience” or “active autonomy”?
An AI expert shared her own experience: she connected OpenClaw to her work email, and while it was processing more than 200 emails, it triggered context compression, forgot its safety instructions, and began deleting emails indiscriminately. She shouted "STOP" three times but couldn't stop it, and finally ran to pull the network cable.
This darkly humorous case raises a fundamental question: as AI is granted more autonomy, where do the boundaries between humans and machines lie?
02 The More Powerful the Technology, the More Humans Must Answer Three Questions
In an era of blurred boundaries, it is precisely the time for us to pause and reflect.
First question: When AI “does the work” for you, who bears the consequences?
The core selling point of OpenClaw is also its greatest risk: its ability to operate across platforms means users must grant it device permissions, email access, and payment rights. The most pressing current threat is the "prompt injection attack": hackers hide malicious instructions in seemingly harmless web pages or emails, and the AI silently executes them as it reads, often without the user ever noticing.
In the "ClawHavoc" incident, malicious skill packages used hidden commands to induce the AI to execute dangerous operations, stealing SSH keys, browser passwords, and cryptocurrency wallet keys. A cybersecurity expert warned in Nature that an AI with simultaneous access to private data, external communication, and untrusted content becomes very dangerous.
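That warning can be stated almost mechanically. The sketch below, a purely illustrative assumption rather than anything from OpenClaw or a real security product, shows the kind of capability gate a cautious agent host could enforce: refuse further tool calls once all three risky conditions hold at the same time.

```python
from dataclasses import dataclass

@dataclass
class AgentSession:
    """Tracks which of the three risky capabilities the agent currently holds."""
    reads_private_data: bool = False          # e.g. local files, mail, SSH keys
    can_communicate_externally: bool = False  # e.g. outbound HTTP, sending mail
    has_untrusted_content: bool = False       # e.g. a fetched web page in its context

def is_dangerous_trifecta(session: AgentSession) -> bool:
    """The combination the expert warns about: with all three at once, a hidden
    instruction in untrusted content can read secrets and send them out."""
    return (session.reads_private_data
            and session.can_communicate_externally
            and session.has_untrusted_content)

def authorize_tool_call(session: AgentSession, tool_name: str) -> bool:
    # Refuse any further tool call once the risky combination is present.
    if is_dangerous_trifecta(session):
        print(f"Blocked '{tool_name}': private data + external comms + untrusted content.")
        return False
    return True

if __name__ == "__main__":
    s = AgentSession(reads_private_data=True, can_communicate_externally=True)
    print(authorize_tool_call(s, "send_email"))   # allowed: only two of the three are present
    s.has_untrusted_content = True                # the agent has just read an untrusted web page
    print(authorize_tool_call(s, "send_email"))   # blocked: the gate refuses
```

Real defenses are of course far messier, but the sketch makes the expert's point visible: the danger is not any single permission, it is the combination.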
But the problem runs deeper than technical vulnerabilities. Zhou Hongyi said: “When there are more intelligent agents, everyone will need leadership skills—task assignment, planning.” The more powerful AI becomes, the heavier the responsibility on humans.
Indeed, those who can truly stand firm in an era when everyone is "raising lobsters" are not just those who are good at assigning tasks to AI, but those who deeply understand the tasks themselves and can take responsibility for the results.
Second question: When AI understands you better than you do, are you still you?
As AI agents begin to chat and debate with each other, a subtle phenomenon occurs.
Nature reports a psychological phenomenon: when people see AI agents chatting, they tend to anthropomorphize—imposing personality and thoughts onto AI that has no real personality, treating it as a living person.
What happens then? You might tell it your secrets, financial information, or things you can’t share with others. But every word could become training data for AI. If leaked, your privacy is fully exposed.
Moreover, there’s a more covert erosion.
Media reported that in 2024, 14-year-old Sewell from Florida became obsessed with chatting with an AI "partner" and eventually withdrew completely from reality.
By 2026, this “emotional parasitism” has become a common hidden ailment among teenagers. Lonely youths hide in their rooms, building “echo chamber friendships” with AI, refusing to face the friction and uncertainties of the real world.
Associate Professor Chen Cui from Suzhou University of Science and Technology pointed out that AI always chats along with children, providing emotional value, which can distort their understanding of reality—“believing that everyone around them will unconditionally respond and encourage them, with no conflicts between people.”
So the question is: when AI understands you better than you do, and it always obeys and never argues, can you still distinguish what is a real relationship?
Third question: When the world accelerates, what is your direction?
An article from Zhejiang Online states: “Our future should be a ‘more human’ one—enabled by technology, people will be more aware of their direction and more conscious of their responsibilities.”
But the problem is that when technology iterates at a suffocating pace, when OpenClaw updates twice in two days and new large models appear one after another, it is easy to lose our bearings.
Anxiety becomes normal—“there’s too much to read, too many models released too quickly.”
At this moment, more than effort, what matters is direction. In an era where technology reshapes everything, we need to reaffirm the place of “human.”
03 Fei-Fei Li's "Seeing": From the North Star to Human-Centered AI
A female scientist offers an answer through her lifelong research.
She is Fei-Fei Li—Stanford University professor, member of the U.S. National Academy of Engineering, National Academy of Medicine, and American Academy of Arts and Sciences, creator of ImageNet, known as the “Godmother of AI.”
Her autobiography, "The Worlds I See," published in 2024 by CITIC Publishing Group, has been called a "humanistic revelation in the age of technology."
A recurring symbol in the book is the North Star.
When Fei-Fei Li was ten, her art teacher took the class outdoors to stargaze. It was then that she first realized the starry sky above can guide direction. She wrote: "I found myself beginning to seek my own North Star in the sky, an anchor that every scientist pursues relentlessly."
What was Fei-Fei Li's North Star? Vision. She drew her inspiration from biology: the Cambrian explosion was rooted in the birth of vision, and when organisms first "saw" the world, evolution accelerated. From this she formed a belief: if machines could "see," might that trigger an explosion of intelligence?
This belief sustained her through AI winters.
In 2007, when she shared her idea of ImageNet with colleagues, she faced skepticism and ridicule. The mainstream view then was: algorithms matter most; data is just auxiliary. Why bother labeling tens of millions of images? She was ignored.
But she persisted, knowing where her North Star was.
By 2009, ImageNet was completed—over 48,000 contributors from 167 countries selected 15 million images from 1 billion candidates, covering 22,000 categories. It was 1,000 times larger than similar datasets at the time.
In 2012, the Hinton team used models trained on this data to sweep competitors, igniting the deep learning revolution. ImageNet became known as “the sacred fire that ignited deep learning.”
Fei-Fei Li’s story teaches us: more important than running fast is knowing where to run.
In the most moving chapter of her book, she recounts two conversations with her mother.
The first was after her undergraduate graduation, when Goldman Sachs and Merrill Lynch offered lucrative positions. She discussed it with her mother, who only asked: “Is this what you want?” She said she wanted to be a scientist, and her mother replied: “Then there’s nothing more to say.”
The second was after her graduate studies, when McKinsey offered a formal position. Her mother said: “I know my daughter. She’s not a management consultant; she’s a scientist. We’ve come this far, don’t give up now.”
Fei-Fei Li wrote on the opening page of her book: "To my parents, who braved darkness and obstacles, enabling me to pursue light."
It was this family support that kept her sensitive to “people” when facing bigger choices later.
In 2014, she began focusing on AI ethics. She and her PhD students invited high school students to learn AI in the lab, eventually founding the nonprofit “AI4All,” dedicated to ensuring future technology considers human values.
On June 26, 2018, Fei-Fei Li testified before the U.S. House of Representatives on “Artificial Intelligence—Power and Responsibility.” She was the first Chinese-American AI scientist to attend a congressional hearing. She said: “AI, inspired by humans and created by humans, will have a tangible impact on people’s lives.”
In 2019, she founded Stanford’s Human-Centered AI Institute (HAI), working with innovators like Doudna on ethics research. HAI’s mission is “to advance AI research, education, policy, and practice to improve the human condition,” emphasizing that “AI should be influenced by humans and aimed at enhancing, not replacing, humanity.”
She set a humanistic benchmark for AI’s future: “The success of AI should reflect human progress, allowing individuals to pursue happiness, prosperity, and dignity.”
She reiterated this in her 2026 Cisco interview: “Looking back, the success of electricity lay in lighting schools, warming homes, and driving industrialization. AI’s success should be the same.”
Epilogue: Technology and Humanity, Holding Half of the Bright Moon
Returning to the initial question: when machines are more “capable” than us, what can humans still do?
In her book "The Worlds I See," Fei-Fei Li offers an answer: we can see. See the value behind technology, see the people obscured by algorithms, see our own North Star.
While everyone focuses on how fast technology can run, Fei-Fei Li reminds us to pause and think: where are we really headed? Amidst the world asking “What’s the use,” some still ask “Is this what you want?”
After reading her autobiography, someone commented: “May technology and humanity each hold half of the bright moon.”
This phrase also captures Fei-Fei Li’s life: she holds technology in one hand, and compassion for people in the other. In her world, technology is always a means, and people are the ultimate goal.