A slap in the face to the AI cyberattack doomsday theory! Study: hackers love AI-generated nude images, so why don't they embrace Vibe Coding?

Research indicates that generative AI has not produced super hackers; it is mostly used for low-level crimes such as SEO scams and generating nude photos. Hackers worry about skill degradation and refuse to rely too heavily on AI, and the real cybersecurity concern is unemployed tech talent flowing into the black market.

Debunking the AI cyberattack doomsday theory: a new paper finds that generative AI has not spawned super hackers

In recent years, cybersecurity firms, government agencies, and AI tech giants have repeatedly warned that generative AI will lead to a new generation of powerful super hackers, but a recent paper challenges this view.

Jointly authored by researchers from the University of Cambridge, the University of Edinburgh, and the University of Strathclyde, the paper, titled “Stand-Alone Complex or Vibercrime?”, examines the actual impact of generative AI on cybercrime, directly challenging the assumption that AI will trigger catastrophic cyberattacks.

The research team analyzed over 15 years of hacker forum data and found that current cyber threats remain mundane. In most cases, AI is used to optimize existing automated scams, run search engine optimization (SEO) fraud, and handle low-level administrative tasks. The imagined super hacker is, for the most part, someone using ChatGPT to write spam or generate nude photos for profit.

Over 90% of hacker forum discussions are unrelated to AI crimes

To understand the true nature of underground cybercrime circles, the research team extracted and analyzed 97,895 posts from the Cambridge Cybercrime Centre’s database, covering the period since the release of ChatGPT in November 2022.

They used topic modeling for analysis and manually reviewed over 3,200 posts. The results show that generative AI has not substantially lowered the technical barrier for novices to enter cybercrime.

The data shows that 97.3% of the sampled posts were categorized as “Other,” meaning the discussions were unrelated to AI-driven crime, while only 1.9% involved the use of Vibe Coding tools.

Image source: the study. The research finds that over 90% of hacker forum discussions are unrelated to AI crimes.

“Dark AI chatbots” are mostly marketing gimmicks

Looking back at 2023, AI chatbots claiming malicious capabilities, such as WormGPT and FraudGPT, dominated media coverage.

However, researchers found from forum data that most posts about dark AI products are users begging for free access or complaining that these tools don’t work at all.

A well-known dark AI service developer even admitted to forum members that the product was purely a marketing stunt, essentially just an unrestricted version of ChatGPT.

The study notes that by the end of 2024, jailbreak methods for mainstream models had become disposable, often failing within a week. While open-source models can be jailbroken indefinitely, they are extremely resource-intensive to run and lack updates, suggesting that current AI safety measures are broadly effective.

Hackers dislike Vibe Coding, worry about skill degradation

The paper also directly responds to a report Anthropic published in August 2025, which claimed that Claude Code was used to extort 17 organizations. No such pattern was found in the underground forums studied.

In the forums surveyed, the main use of AI coding assistants is as an autocomplete tool for skilled programmers; low-skilled attackers still prefer using ready-made, effective scripts.

One forum user warned that AI-assisted coding could amplify the risks of insecure code; another hacker stated outright that over-reliance on Vibe Coding leads to rapid skill degradation.

The real use of AI in cybercrime: spam content and romance scams

From this paper, it appears that AI’s actual role in aiding criminals is mostly at the bottom of the food chain.

For example, SEO scammers use AI models to churn out large volumes of spam articles; romance scammers and online pornography operators are beginning to adopt AI voice cloning; and get-rich-quick opportunists mass-produce AI-written e-books, selling them for $20 each.

The most disturbing market is nude-photo generation services. Some vendors claim they can use AI to generate nude images of any woman, with prices ranging from $1 per photo to $8 for 10 photos, $40 for 50, and $75 for 90.

The researchers conclude by emphasizing that AI’s biggest disruption to the cybercrime ecosystem comes from developers laid off by legitimate tech companies turning to underground markets. Economic downturns and a sluggish job market are the main forces pushing skilled legitimate developers into scam and cybercrime communities.

Further reading:
Microsoft AI CEO: AI will automate white-collar jobs within 18 months, but may also lead to major security incidents
