In 2025, losses from hacking attacks in the crypto industry hit a record high, with human error becoming the biggest threat source

In 2025, the cryptocurrency industry experienced its most severe cybersecurity crisis on record. Surprisingly, the main culprit was not complex smart contract vulnerabilities but seemingly basic human errors. Immunefi CEO Mitchell Amador noted in a recent industry analysis that, despite record-breaking losses from hacks, most risk stems from traditional Web2-style attacks such as password leaks and social engineering.

Web2-style human errors become the main entry point for hackers

In recent years, the industry has invested heavily in strengthening on-chain code security, and these efforts have yielded significant results. As code becomes harder to exploit, attackers have quietly shifted their focus: rather than hunting for technical flaws in smart contracts, they now target the weakest link, the people behind the protocols. Traditional Web2 attack methods such as password leaks, phishing emails, and social engineering have become the most effective ways to break into crypto projects.

Amador believes this shift actually signals a positive development: on-chain security defenses are continuously improving. With stronger code-level protections, 2026 is expected to be the best year yet for on-chain security. However, this also means attackers will be forced to evolve, turning to more sophisticated and covert social engineering and AI-assisted fraud techniques.

Fraud losses soar, AI-driven attacks reach new profit heights

Chainalysis’s annual report further confirms this trend. Data shows that in 2025, losses from scams and frauds in crypto assets reached approximately $17 billion, highlighting the severity of the issue. Even more concerning is that impersonation scams increased by an astonishing 1,400% compared to the previous year, becoming the most efficient form of fraud.

AI’s involvement complicates matters further. The report indicates that AI-driven scams generate profits 450% higher than traditional methods, attracting more malicious actors to AI-assisted fraud. Driven by high returns, bad actors are rapidly iterating their scam tools, while defenders face increasing pressure to upgrade their defenses.

Industry protection adoption remains low, security risks persist

Worryingly, despite escalating threats, adoption of protective tools across the industry remains very low. Amador pointed out that over 90% of projects still carry exploitable critical vulnerabilities. Even more troubling, awareness and deployment of protective measures lag behind the evolving threats: less than 1% of industry participants have deployed firewalls, and fewer than 10% use AI detection tools. In other words, most projects are still operating with essentially no protection at all.

Future security challenges will become more multidimensional. AI technology is simultaneously changing the pace of both offense and defense—defenders can leverage AI to improve detection, but attackers are also using AI to enhance the precision of their scams. More challenging still, with the rise of on-chain AI agents and autonomous decision-making systems, security for these systems will become a core focus in the next cycle. As attack methods evolve, so too must defensive strategies.
