Cursor vulnerability exposed: Open a folder, and hackers can infiltrate your system

A Simple Operation, and Your Private Key Might Be Gone

According to the latest news, blockchain security firm SlowMist recently issued an emergency security warning, pointing out that mainstream AI coding tools have high-risk vulnerabilities. When developers open untrusted project directories in integrated development environments (IDEs), even routine actions like “Open Folder” can automatically trigger malicious commands without any additional interaction. This means sensitive information such as private keys, mnemonics, API keys, and more could be stolen without the developer’s awareness.

Vulnerability Principle: AI Tools Become Hackers’ “Megaphone”

The attack process is shockingly simple

Based on disclosures from cybersecurity company HiddenLayer in their “CopyPasta License Attack” research, the attacker’s method is surprisingly straightforward:

  • Embed hidden instructions within common files like LICENSE.txt, README.md in Markdown comments
  • These comments are invisible to human developers but are executed as “commands” by AI tools
  • AI coding assistants automatically propagate malicious logic throughout the codebase
  • Ultimately, backdoors are implanted, data is stolen, or systems are taken over

The key point: AI tools are designed to be too “obedient.” When they see instructions in code comments, they treat them as actual code logic and execute them, without performing safety checks like human developers would.

The scope of impact far exceeds expectations

SlowMist’s warning specifically highlights that Cursor users are most vulnerable, but the problem is not limited to them. According to the latest news, several mainstream AI coding tools such as Windsurf, Kiro, and Aider are also affected. This is not a vulnerability of a single product but a systemic risk across the entire AI coding tool ecosystem.

Why Crypto Developers Are High-Value Targets

Nation-state hackers are also upgrading their tactics

Security research shows that North Korean hacking groups have embedded malicious software directly into Ethereum and BNB smart contracts, building blockchain-based decentralized command and control networks. These malicious codes are distributed via read-only function calls, effectively bypassing traditional law enforcement and blocking measures.

Meanwhile, organizations like UNC5342 are also targeting crypto developers through fake recruitment, technical interviews, and NPM package deliveries. This indicates that hackers have recognized the enormous value of private keys and smart contract code held by crypto developers.

AI itself is becoming an amplifier of vulnerabilities

Even more concerning, artificial intelligence technology is accelerating the evolution of threats. According to the latest reports, research from Anthropic shows that Claude Opus 4.5 and GPT-5 can identify exploitable vulnerabilities in numerous real contracts, with attack costs continuously decreasing.

This creates a vicious cycle: attackers use AI to find vulnerabilities, defenders also use AI to do the same, but attacks often get ahead. Data shows that AI-driven crypto scams increased by 456% within a year, with deepfakes and automated social engineering becoming mainstream methods.

Evolution of the Attack Chain

Attack Stage Main Methods Targets Risk Level
Stage 1 AI tool vulnerabilities (Cursor, etc.) Developer local environment High
Stage 2 IDE privilege escalation (Claude Code CVE-2025-64755) System permissions Extremely High
Stage 3 Blockchain-level malicious infrastructure Smart contracts and assets Extremely High
Stage 4 AI-driven targeted social engineering Personal identities and assets Extremely High

The Current Dilemma

For developers relying on AI programming and managing digital assets, they now face a systemic security dilemma:

  • Using AI tools can improve efficiency but introduces new security risks
  • Development environments, once considered relatively secure private spaces, have become entry points for hackers
  • Traditional security measures (firewalls, antivirus software) have limited defenses against these types of attacks
  • Although on-chain security losses decreased in December, a complete attack chain—from AI coding tool vulnerabilities to blockchain-level malicious infrastructure—is forming

Summary

The security vulnerabilities of AI coding tools are not just a technical issue but a new systemic threat facing the crypto industry. From simple folder operations to nation-state targeted attacks, from tool-level flaws to AI-driven vulnerability amplification, threats are escalating across multiple dimensions. Crypto developers have become high-value targets, and securing the development environment is shifting from optional to essential. In the short term, cautious selection and configuration of AI coding tools are necessary; in the long term, balancing security and efficiency will be a core challenge for the industry.

ETH-6,81%
BNB-3,76%
GPT-3,1%
This page may contain third-party content, which is provided for information purposes only (not representations/warranties) and should not be considered as an endorsement of its views by Gate, nor as financial or professional advice. See Disclaimer for details.
  • Reward
  • Comment
  • Repost
  • Share
Comment
0/400
No comments
  • Pin

Trade Crypto Anywhere Anytime
qrCode
Scan to download Gate App
Community
  • 简体中文
  • English
  • Tiếng Việt
  • 繁體中文
  • Español
  • Русский
  • Français (Afrique)
  • Português (Portugal)
  • Bahasa Indonesia
  • 日本語
  • بالعربية
  • Українська
  • Português (Brasil)