Research: Generative AI has not yet significantly enhanced hacking capabilities and is more often used for spam and scams.

Deep Tide TechFlow News, May 6th: research by scholars from the University of Cambridge, the University of Edinburgh, and the University of Strathclyde shows that generative AI has not turned hackers into "super hackers." The team analyzed 97,895 posts made on cybercrime forums after ChatGPT's launch in November 2022 and manually reviewed more than 3,200 of them. Of the sampled posts, 97.3% fell into the "Other" category rather than actual discussion of using AI for crime; only 1.9% involved techniques like "vibe coding."

The study points out that so-called "Dark AI" tools such as WormGPT and FraudGPT are driven more by market hype than by capability, and have produced little usable malware. Many posts simply seek free access, speculate about the tools, or complain that they do not work. The researchers conclude that observable criminal uses of AI currently concentrate on high-frequency, low-barrier activities: bulk SEO spam content, romance scams, voice cloning, image generation, and low-cost AI nude-image services.
