Futures
Access hundreds of perpetual contracts
TradFi
Gold
One platform for global traditional assets
Options
Hot
Trade European-style vanilla options
Unified Account
Maximize your capital efficiency
Demo Trading
Introduction to Futures Trading
Learn the basics of futures trading
Futures Events
Join events to earn rewards
Demo Trading
Use virtual funds to practice risk-free trading
Launch
CandyDrop
Collect candies to earn airdrops
Launchpool
Quick staking, earn potential new tokens
HODLer Airdrop
Hold GT and get massive airdrops for free
Launchpad
Be early to the next big token project
Alpha Points
Trade on-chain assets and earn airdrops
Futures Points
Earn futures points and claim airdrop rewards
Domestic "Lobster" Releases Surge, Can Security Keep Up with the Trend?
In March, an “OpenClaw” lobster went viral, and the appearance of lobsters also signified that AI has upgraded from “dialogue participant” to the “executor” with the highest system permissions.
As the “lobster” trend spread domestically, domestic versions of lobsters emerged one after another, with companies like Tencent, Baidu, Alibaba, Zhipu, MINIMAX, and smartphone manufacturers all participating in this “lobster farming” craze.
Although lobsters are popular, their security risks have also become a concern for many. Recently, with the surge in “lobster” farming, security incidents have occurred. For example, Meta’s AI security expert Summer Yue connected OpenClaw to her work email, causing the AI to go out of control on the spot, ignoring her three consecutive “stop” commands and wildly deleting hundreds of emails.
Some developers asked the AI to analyze web interfaces. Due to vague instructions, the AI interpreted it as needing to study the API functions and directly called the delete interface, wiping out all content on the comment platform.
On March 10, the National Internet Emergency Center issued a risk warning regarding the security of OpenClaw applications. The warning pointed out that to achieve “autonomous task execution,” OpenClaw was granted high system permissions, including access to local file systems, reading environment variables, calling external service APIs, and installing extensions. However, its default security configuration is extremely fragile, and once attackers find a breach point, they can easily gain full control of the system.
The risk warning states that, initially, improper installation and use of the OpenClaw agent have already led to serious security risks, including “prompt injection” risks, “misoperation” risks, plugin (skill) poisoning risks, and security vulnerabilities.
On the evening of March 11, the Cybersecurity Threat and Vulnerability Information Sharing Platform of the Ministry of Industry and Information Technology (hereinafter referred to as “the platform”) released a “Six Do’s and Six Don’ts” advice on preventing security risks of open-source intelligent agents like OpenClaw, organized by platform providers, vulnerability collection platforms, and cybersecurity companies.
Industry insiders say that the key to risk levels is not only technology but also organizational usage methods. If individual users connect OpenClaw directly to core emails, online banking, code repositories, and customer databases, the risks are amplified; if companies restrict it to isolated environments, only open whitelisted tools, set manual confirmation thresholds, and keep audit logs, the risks are significantly reduced.
Qihoo 360 security expert Wang Lijun stated that the popularity of OpenClaw continues to rise, and the “lobster farming” trend has spread from the AI community to the general public. However, behind this technological frenzy, security incidents are frequent, mainly because OpenClaw greatly accelerates AI’s evolution toward “superhuman” capabilities. The specific risks mainly include uncontrolled permissions and “jailbreak” risks; supply chain risks from skill providers; exposure to the internet and remote intrusion risks; and data privacy leaks.
With the launch of domestic lobster versions, various companies have taken corresponding measures to address lobster farming security issues. On March 12, Tencent launched the OpenClaw Security Toolbox, providing security guarantees for enterprises and individual users. The system covers three major scenarios: cloud-native, enterprise intranet, and personal PCs—leveraging Tencent Cloud technology for environment isolation and unified monitoring; using Tencent iOA to strengthen office network security; and employing Tencent PC Manager to offer one-click isolation protection for ordinary users.
Additionally, Tencent has packaged some security capabilities into AI Skills, available on ClawHub and SkillHub communities, allowing users to enable “lobsters” to protect themselves through simple conversations.
Tencent Cloud Security General Manager Su Jiandong told LatePost that their current work not only involves security integration for internal products like WorkBuddy and QClaw but also provides the same security solutions for third-party OpenClaw. Overall, they are working on host, network, and supply chain levels. For example, in host security, they have a dedicated AI agent security center to check for vulnerabilities and insecure configurations, encrypt plaintext passwords, and prevent malicious high-risk operations; on the network, they prevent prompt injection attacks and check for sensitive data leaks; in the supply chain, they conduct access and security checks on malicious skills.
Not only Tencent, but companies like Baidu, Volcano Engine, and Zhipu are also releasing domestic lobster products alongside security protection measures.
On one hand, domestic lobsters are being launched one after another; on the other hand, companies are gradually introducing solutions to address lobster farming security issues. Industry insiders say that, overall, the risks of lobster farming are controllable, but “controllable” does not mean “can be freely opened to everyone in the short term.” From an engineering governance perspective, issues like internet exposure, plaintext credentials, unknown plugin sources, excessive permissions, and lack of log auditing can be significantly mitigated through security reinforcement.
As AI agents develop, protecting personal privacy has become a major concern. Will AI agents accessing internet platforms through advanced phone permissions pose privacy security risks? Do they require dual authorization from individuals and platforms? These questions are gradually being addressed.
Recently, overseas legal restrictions have also been introduced for similar behaviors. On March 9, California courts ruled that a startup company, Perplexity AI, must stop its intelligent agent from accessing Amazon’s website, prohibit creating or taking over any Amazon accounts, and destroy all obtained data. It is reported that their agent, Comet, disguised as Google Chrome, bypassed Amazon’s detection of automation tools and transmitted shopping data back to Perplexity’s servers without user knowledge. In November last year, Amazon sued the company.
This “temporary injunction” is not a final ruling but indicates the judge’s recognition and respect for the dual authorization of users and operated platforms. The court specifically cited a landmark case from 2009—Facebook’s lawsuit against aggregation site Power.com. The ruling stated that even if users agree, once a platform explicitly revokes authorization and issues a stop notice, continued access by third parties constitutes “unauthorized access.”
Therefore, the court ordered Perplexity to temporarily stop logging into Amazon accounts on behalf of users and to destroy all Amazon data obtained. Perplexity has filed an appeal.
The dispute mainly revolves around the issue of dual authorization. Although no definitive conclusion has been reached yet, the integration of AI agents into people’s daily lives and work makes dual authorization an unavoidable topic.
As more AI agents replace users in executing tasks, safeguarding user privacy and security remains a top priority.
Compiled from sources including CaiLianShe, 21st Century Business Herald, and LatePost.