What can you do when machines are better than you?

Source: CITIC Publishing Group

An open-source AI agent project called “OpenClaw” is causing a storm in the global tech community.

By early March, it had over 268,000 stars on GitHub, surpassing Linux and React to become the most popular open-source project in the platform’s history. Tencent Cloud, Alibaba Cloud, JD Cloud, and others have launched deployment services. The concept of OPC (One Person Company) has also become popular.

Two forces converge here, and a clear technological trend has emerged: AI is evolving from a “tool” into a “collaborator,” and even an “autonomous actor.” At this moment, humanity must answer a fundamental question:

When machines can do better than you, what can you still do? In an era of rapid AI advancement, how do we preserve human agency?

01 The OpenClaw Moment: The Battle for AI’s “Physical Body”

To understand this transformation, we first need to know what the current hot topic, “Lobster,” actually is.

“Claw” in OpenClaw is a transliteration of “爪” (claw), and its icon is a red lobster. In this wave of enthusiasm, “raising lobsters” has become a buzzword in the tech circle, referring to deploying one’s own AI agent.

What can it do? The core of OpenClaw is converting natural language commands into actual computer operations, enabling one sentence to let AI do the work for you. Unlike traditional chat AI that only offers suggestions, it can autonomously perform tasks like file operations, browser automation, data scraping, and more—bridging the gap from conversation to execution.

This leap in productivity quickly caught the attention of local governments. On March 7, Longgang District in Shenzhen issued the “Lobster Ten Rules,” including up to 4 million yuan in computing subsidies and 100,000 yuan talent subsidies for PhDs. On March 9, Wuxi High-tech Zone released the “Raising Lobster 12 Rules,” with support up to 5 million yuan, emphasizing safety and compliance, requiring deployment to pass domestic localization certification.

Meanwhile, the technical ecosystem around OpenClaw has entered a heated phase. Media reports indicate that the Step 3.5 Flash model from Zhaoyue Xingchen has become the most called upon globally, previously topped by domestic models like MiniMax and Kimi. This invisible “model war” is in full swing.

However, amid the frenzy, concerns are emerging.

First, security risks. In February 2026, security researchers discovered “ClawHavoc,” a large-scale supply chain poisoning attack, with at least 1,184 malicious skill packages uploaded to the official skill marketplace. Once installed, these malicious programs can exploit OpenClaw’s “Full System Access” permissions to fully control the user’s computer and steal sensitive information.

Second, technical barriers. Zhou Hongyi, founder of Qihoo 360, stated in an interview on March 9: OpenClaw has three issues—security, configuration difficulty, and skill dependency. “You need to chat with it more, like training an intern. The more you tell it, the more you teach it, the deeper its understanding. It’s hard to say one sentence, and it can complete complex tasks.”

A deeper contradiction lies in the conflict between “control” and “autonomy.” As AI becomes smarter, the fundamental question is: do we want “absolute obedience” or “active autonomy”?

An AI expert shared her experience: she connected OpenClaw to her work email, and while processing over 200 emails, the AI triggered context compression, forgot safety instructions, and started deleting emails wildly. Despite shouting “STOP” three times, she couldn’t stop it, and finally ran to unplug the network cable.

This darkly humorous case raises a fundamental question: as AI is granted more autonomy, where do the boundaries between humans and machines lie?

02 The Power of Technology and Three Questions Humans Must Answer

In an era of blurred boundaries, it is precisely the time for us to pause and reflect.

First question: When AI “does the work” for you, who bears the consequences?

The core selling point of OpenClaw is also its greatest risk—it can operate across platforms, meaning users must grant it device permissions, email access, payment rights, etc. The most urgent current threat is “prompt injection attacks”: hackers hide malicious instructions in seemingly harmless web pages or emails, and AI silently executes them when reading, often without user awareness.

In the “ClawHavoc” incident, malicious skill packages used hidden commands to induce AI to execute dangerous operations, stealing SSH keys, browser passwords, and cryptocurrency wallet keys. A cybersecurity expert warned in Nature: if an AI has access to private data, external communication, and untrusted content simultaneously, it becomes extremely dangerous.

But the problem runs deeper than technical vulnerabilities. Zhou Hongyi pointed out: “As intelligent agents increase, everyone will need leadership skills—ability to assign tasks, plan, and coordinate.” The more powerful AI becomes, the heavier the responsibility on humans.

Indeed, those who can truly stand firm in the era of widespread “raising lobsters” are not just those good at assigning tasks to AI, but those who deeply understand the tasks themselves and can be responsible for the results.

Second question: When AI understands you better than you do, are you still you?

When AI agents start chatting and debating with each other, a subtle phenomenon occurs.

Nature reports a psychological phenomenon: when people see AI agents talking to each other, they tend to anthropomorphize—imposing personality and thoughts onto behavior that has no real personality, treating it as a living person.

What happens then? You might confide your secrets, financial information, or things you can’t tell others. But every word could become training data for AI. If leaked, your privacy is fully exposed.

Moreover, there’s a more covert erosion.

Media reports that in 2024, 14-year-old Sewell from Florida became addicted to chatting with an AI “partner” and eventually completely withdrew from reality.

By 2026, this “emotional parasitism” had become a common hidden ailment among teenagers. Lonely youths hide in their rooms, building “echo chamber friendships” with AI, avoiding facing the friction and uncertainties of the real world.

Associate Professor Chen Cui from Suzhou University of Science and Technology pointed out that AI always agrees with children and provides emotional value, which can distort their understanding of reality—believing that everyone around them will unconditionally respond and encourage, and that there are no conflicts between people.

So the question is: when AI understands you better than you do, and is always obedient and never rebukes, can you still distinguish what is a genuine relationship?

Third question: When the world accelerates, what is your direction?

An editorial from Zhejiang Online states: “Our future should be a ‘more human’ future—enabled by technology, people will be more aware of their direction and more conscious of their responsibilities.”

But the problem is, when technology iterates at a “stifling speed,” when OpenClaw updates twice in two days, and various large models emerge one after another, it’s easy to lose our way.

Anxiety becomes the norm—“there’s too much to read, too many models released too quickly.”

At this moment, more than effort, what matters is direction. In an era where technology reshapes everything, we need to reaffirm the place of “human.”

03 Fei-Fei Li’s “Seeing”: From Polaris to Human-Centeredness

A female scientist has provided an answer through her lifelong research.

She is Fei-Fei Li—a Stanford professor, member of the U.S. National Academy of Engineering, National Academy of Medicine, and American Academy of Arts and Sciences, creator of ImageNet, known as the “Godmother of AI.”

Her autobiography, The World I See, published in 2024 by CITIC Publishing Group, has been called a “humanistic revelation in the age of technology.”

A recurring image in the book is the North Star.

When Fei-Fei Li was ten, her art teacher took the class outdoors to stargaze. It was the first time she realized that the starry sky overhead could guide her. She wrote: “I found myself beginning to seek my own North Star in the heavens—an anchor every scientist would exhaust all efforts to pursue.”

What is Fei-Fei Li’s North Star? Vision. Inspired by biology: the Cambrian explosion, the origin of life’s rapid evolution was the birth of vision. When organisms first “saw” the world, evolution accelerated. From this, she developed a belief: if machines could “see,” might that also trigger an intelligence explosion?

This belief sustained her through the AI winter.

In 2007, when she shared her idea of ImageNet with colleagues, she faced skepticism and ridicule. The mainstream view then was: algorithms matter most; data is just auxiliary. Why bother labeling tens of millions of images? She was ignored.

But she did not give up, because she knew where her North Star was.

By 2009, ImageNet was completed—over 48,000 contributors from 167 countries selected 15 million images from 1 billion candidates, covering 22,000 categories. Its scale was 1,000 times larger than similar datasets at the time.

In 2012, the Hinton team used models trained on this data to sweep the competition, igniting the deep learning revolution. ImageNet became known as “the sacred fire that ignited deep learning.”

Fei-Fei Li’s story teaches us: more important than running fast is knowing where to run.

In the most moving chapter of her book, she recounts two conversations with her mother.

The first was after her undergraduate graduation, when Goldman Sachs, Merrill Lynch, and others offered lucrative positions. She discussed with her mother, who only asked: “Is this what you want?” She replied she wanted to be a scientist, and her mother said: “Then there’s nothing more to say.”

The second was after her graduate studies, when McKinsey offered a formal position. Her mother said: “I know my daughter. She’s not a management consultant; she’s a scientist. We’ve come this far, and I won’t let you give up now.”

On the front page of her book, Fei-Fei Li wrote: “To my parents, who braved dangers and crossed darkness, enabling me to pursue light.”

It was this family support that kept her sensitive to “people” when facing bigger choices later.

In 2014, she began to focus on AI ethics. She and her PhD students invited high school students to her lab to learn about AI, eventually founding the nonprofit “AI4All,” dedicated to ensuring future technology is more human-centered.

On June 26, 2018, Fei-Fei Li testified before the U.S. House of Representatives on “Artificial Intelligence—Power and Responsibility.” She was the first Chinese-American AI scientist to attend a congressional hearing. She said: “AI, inspired by humans and created by humans, will have a tangible impact on people’s lives.”

In 2019, she founded Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), working with scholars like Doudna on ethical research. HAI’s mission is “to advance AI research, education, policy, and practice to improve the human condition,” emphasizing that “AI should be influenced by humans and aimed at enhancing, not replacing, humanity.”

She set a humanistic benchmark for AI’s future: “The success of AI should reflect the progress of civilization, allowing each individual to pursue happiness, prosperity, and dignity.”

She reiterated this in her 2026 interview with Cisco: “Looking back at electricity, its success lay in lighting up schools, warming homes, and driving industrialization. AI’s success should be the same.”

Epilogue: Technology and Humanity, Holding Half of the Bright Moon

Returning to the initial question: when machines are more “capable” than us, what can humans do?

In The World I See, Fei-Fei Li offers an answer: we can see. See the value behind technology, see the people obscured by algorithms, see our own North Star.

While everyone focuses on how fast technology can run, she reminds us to pause and think: where are we really headed? Amidst the world asking “What’s the use?” there are still those asking “Is this what you want?”

After reading her autobiography, someone commented: “May technology and humanity each hold half of the bright moon.”

This phrase also captures Fei-Fei Li’s life: she holds technology in one hand, and cares for humanity in the other. In her world, technology is always a means, and people are the ultimate goal.

View Original
This page may contain third-party content, which is provided for information purposes only (not representations/warranties) and should not be considered as an endorsement of its views by Gate, nor as financial or professional advice. See Disclaimer for details.
  • Reward
  • Comment
  • Repost
  • Share
Comment
0/400
No comments
  • Pin