A slap in the face for the AI cyber-doom theory! Study: hackers love AI nude images, so why don't they embrace Vibe Coding?
The study finds that generative AI has not produced super hackers; most criminal use of AI is limited to low-level schemes such as SEO scams and generating nude photos. Hackers themselves worry about skill degradation and refuse to rely heavily on AI; the real cybersecurity concern is unemployed tech talent flowing into the black market.
Debunking the AI cyber-doom theory: paper finds generative AI has not spawned super hackers
In recent years, cybersecurity firms, government agencies, and AI tech giants have repeatedly warned that generative AI will lead to a new generation of powerful super hackers, but a recent paper challenges this view.
Authored jointly by researchers from the University of Cambridge, the University of Edinburgh, and the University of Strathclyde, the paper, titled “Stand-Alone Complex or Vibercrime?”, examines the actual impact of generative AI on cybercrime and directly challenges the assumption that AI will trigger catastrophic cyberattacks.
The research team analyzed over 15 years of hacker forum data and found that current cyber threats remain quite mundane. In most cases, AI is used to optimize existing automated scams, run search engine optimization (SEO) fraud, and handle low-level administrative tasks. The "super hacker" of the public imagination is, in practice, mostly someone using ChatGPT to write spam or generate nude photos for profit.
Over 90% of hacker forum discussions are unrelated to AI crimes
To understand the true nature of underground cybercrime circles, the research team extracted and analyzed 97,895 posts from the Cambridge Cybercrime Center’s database since the release of ChatGPT in November 2022.
They used topic modeling for analysis and manually reviewed over 3,200 posts. The results show that generative AI has not substantially lowered the technical barrier for novices to enter cybercrime.
The data show that 97.3% of the sampled posts fell into the “Other” category, meaning those discussions were unrelated to AI-driven crime, while only 1.9% involved the use of Vibe Coding tools.
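The classification step described above, labeling each post with a coarse category and computing each category's share of the corpus, can be sketched roughly as follows. The keyword lists and sample posts here are invented for illustration and are not taken from the paper, which used topic modeling plus manual review rather than simple keyword matching.

```python
from collections import Counter

# Hypothetical keyword lists (not from the paper) used to bucket forum posts.
AI_KEYWORDS = {"chatgpt", "llm", "jailbreak", "wormgpt"}
VIBE_KEYWORDS = {"vibe coding", "claude code", "copilot"}

def categorize(post: str) -> str:
    """Assign a post to a coarse category by keyword match."""
    text = post.lower()
    if any(kw in text for kw in VIBE_KEYWORDS):
        return "vibe_coding"
    if any(kw in text for kw in AI_KEYWORDS):
        return "ai_related"
    return "other"  # the bucket that held 97.3% of posts in the study

def category_shares(posts: list[str]) -> dict[str, float]:
    """Return each category's share of the corpus as a percentage."""
    counts = Counter(categorize(p) for p in posts)
    total = len(posts)
    return {cat: 100 * n / total for cat, n in counts.items()}

# Toy corpus for demonstration only.
posts = [
    "selling SEO backlink packages, bulk discount",
    "anyone got a working WormGPT login?",
    "tried vibe coding a stealer, it refused",
    "need a carding tutorial",
]
print(category_shares(posts))
```

On a real corpus of ~98,000 posts, the same share computation would apply, only with categories produced by a topic model and checked by hand instead of keyword lookup.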
Image source: research study. Research finds that over 90% of hacker forum discussions are unrelated to AI crimes
“Dark AI chatbots” are mostly marketing gimmicks
Looking back at 2023, AI chatbots claiming malicious capabilities, such as WormGPT and FraudGPT, dominated media coverage.
However, researchers found from forum data that most posts about dark AI products are users begging for free access or complaining that these tools don’t work at all.
A well-known dark AI service developer even admitted to forum members that the product was purely a marketing stunt, essentially just an unrestricted version of ChatGPT.
The study notes that by the end of 2024, jailbreak methods for mainstream models had become disposable tools, often failing within a week. While open-source models can be jailbroken indefinitely, they are extremely resource-intensive to run and lack updates, which suggests that current AI system safeguards are in fact effective.
Hackers dislike Vibe Coding, fearing skill degradation
The paper also directly responds to a report published by Anthropic in August 2025, which claimed that Claude Code had been used for cyber extortion against 17 organizations. No trace of this pattern was found in the underground forums studied.
In the forums surveyed, the main use of AI coding assistants is as an autocomplete tool for skilled programmers; low-skilled attackers still prefer ready-made, effective scripts.
One forum user warned that AI-assisted coding could amplify the risks of insecure code; another hacker directly stated that over-reliance on Vibe Coding would lead to rapid “hacker skill” deterioration.
The real use of AI in cybercrime: spam content and sextortion
From this paper, it appears that AI’s actual role in aiding criminals is mostly at the bottom of the food chain.
For example, SEO scammers are using AI models to churn out large volumes of spam articles; romance scammers and online pornography operators are beginning to incorporate AI voice cloning; and get-rich-quick opportunists are mass-producing AI-written e-books and selling them for $20 each.
The most disturbing market involves nude photo generation services. Some vendors claim they can use AI to make any girl undress, charging $1 per photo, $8 for 10, $40 for 50, and $75 for 90.
In conclusion, the researchers emphasize that the biggest AI-driven disruption to the cybercrime ecosystem comes from developers laid off by legitimate tech companies turning to underground markets. Economic downturns and a sluggish job market are the main reasons skilled, formerly legitimate developers drift into scam and cybercrime communities.
Further reading:
Microsoft AI CEO: AI will automate white-collar jobs within 18 months, but may also lead to major cybersecurity incidents