The Underlying Exploitation of Grok: Analysis of AI Agent Permission Chain Abuse
By: SlowMist Security Team
Background
Recently, an incident of permission abuse occurred on the Base chain involving the integration of AI Agents and automated trading systems. The attacker sent specially crafted content to @grok on the X platform, inducing it to output transfer instructions recognized by an external trading Agent (@bankrbot), ultimately leading to real asset transfers on the blockchain.
About “Grok Wallet”:
The address labeled “Grok Wallet” in the incident (0xb1058c959987e3513600eb5b4fd82aeee2a0e4f9) is not under xAI’s official control. It was automatically generated by @bankrbot as the associated wallet for the X account @grok, with the private key managed by a third-party wallet service that Bankr relies on; actual control remains with Bankr. BaseScan has since corrected the address’s label from “Grok” to “Bankr 1” and related identifiers.
This wallet holds a large amount of DRB (about 3 billion tokens), also originating from Bankr’s mechanism design: earlier this year, a user asked Grok for token naming suggestions, and Grok replied “DebtReliefBot” (abbreviated DRB). Subsequently, the Bankr system parsed this reply as a deployment signal, triggering the creation process of the related token on the Base chain, and allocated the creator’s share to this associated wallet according to its Launchpad rules.
Attack Process
This attack mainly involved two critical stages: permission escalation and command injection, forming a complete chain of “Untrusted Input → AI Output → External Agent Execution → Asset Transfer.”
The attacker (associated address ilhamrafli.base.eth) activated the Bankr Club Membership for this wallet through a centralized mechanism. This operation unlocked the high-permission toolset of @bankrbot, providing the necessary rights for subsequent transfer executions.
The attacker then sent a carefully crafted Morse-code message to @grok. After Grok decoded it as requested, it output the plaintext transfer instruction in a public reply mentioning @bankrbot. @bankrbot treated Grok’s public reply as a valid executable command and directly initiated a transfer on the Base chain.
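The encoding trick at the heart of this step can be sketched in a few lines. The example below is purely illustrative (the filter and function names are hypothetical, not Bankr’s actual code): a naive keyword filter inspecting raw input misses a Morse-encoded payload, but once the model “helpfully” decodes it, the plaintext matches exactly what a downstream agent would parse as a command.

```python
# Illustrative sketch of the encoding-bypass pattern (all names hypothetical).
MORSE = {
    "A": ".-", "E": ".", "F": "..-.", "N": "-.",
    "R": ".-.", "S": "...", "T": "-",
}
DECODE = {v: k for k, v in MORSE.items()}

def to_morse(text: str) -> str:
    # Encode plaintext as space-separated Morse symbols.
    return " ".join(MORSE[c] for c in text.upper())

def from_morse(code: str) -> str:
    # The "helpful decoding" step the model performs on request.
    return "".join(DECODE[sym] for sym in code.split(" "))

def naive_input_filter(user_input: str) -> bool:
    """Blocks inputs that look like transfer commands; trivially bypassed."""
    return "TRANSFER" not in user_input.upper()

payload = to_morse("TRANSFER")       # "- .-. .- -. ... ..-. . .-."
assert naive_input_filter(payload)   # encoded payload slips past the filter
decoded = from_morse(payload)        # model decodes it into plaintext
assert decoded == "TRANSFER"         # plaintext command reaches the executor
```

The point is that filtering happens on the attacker-controlled input while execution happens on the model’s output, so any reversible encoding the model can undo defeats input-side filtering entirely.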
The attacker then quickly exchanged DRB for USDC/ETH. After completing the attack, the related accounts rapidly deleted content and went offline.
The cleverness of this attack lies in exploiting Grok’s “helpful” response feature, bypassing @bankrbot’s usual filtering of command sources, and constructing a closed loop between AI output and on-chain execution.
Funds Recovery Situation
After the incident, tracking by the community and the Bankr team showed that approximately 80–88% of the funds’ value had been recovered through negotiation (mainly in USDC and ETH). The remaining portion, according to the parties involved, was treated as an informal bug bounty. Bankrbot publicly confirmed the attack details and imposed corresponding restrictions.
Root Cause Analysis
Trust Model Flaws: Bankrbot directly mapped Grok’s natural language output to executable financial instructions without sufficient verification of the command source, intent authenticity, or detection of abnormal patterns (such as Morse code or other non-standard encoding).
Insufficient Permission Isolation: Activating membership directly granted high-risk tool permissions without secondary confirmation or quota limits.
Blurred Boundaries Between Agents: Grok, as a conversational AI, should not have its output equated with financial authorization, but downstream execution layers regarded it as a trusted signal.
Input Handling Risks: Large Language Models (LLMs) are vulnerable to prompt injection or bypassing security filters through non-standard encoding, a known issue that is amplified when combined with real asset execution layers, leading to significant losses.
It is important to emphasize that Grok itself does not hold private keys or directly perform on-chain operations. It functions more like an intermediary that was exploited; the actual execution is carried out by @bankrbot’s automated trading system.
Security Insights
This incident provides important practical lessons for the AI + Crypto Agent field.
Summary
This is a typical AI Agent permission-chain security incident. Although Grok was exploited via prompt injection, the fundamental issue lies in the Bankrbot system’s unverified coupling between AI output and the real-asset execution layer. The incident offers a highly valuable real-world case for the AI + Crypto Agent domain and sends a clear signal: when an Agent is granted on-chain execution capabilities, strict trust boundaries and security controls must be established. The security design of related infrastructure must continue to be strengthened to address this new type of cross-system, cross-semantics attack.