Particle.news

Major Study Finds Generative AI Has Yet to Transform Cybercrime

Researchers point to insecure AI products as the nearer-term threat.

Overview

  • Researchers from Cambridge, Edinburgh, and Strathclyde analyzed 97,895 underground forum threads from the CrimeBB corpus using topic modeling, manual reading, and ethnographic observation.
  • Measured use of AI clustered in low-skill, high-volume schemes such as SEO spam, romance scams, cheaply sold AI-generated nude images, and social media bot fraud and harassment.
  • Safety rules built into major chatbots limited harmful outputs, and most jailbreak techniques stopped working after short periods.
  • "Dark AI" services promoted in 2023, including WormGPT-style tools, drew interest, but forum discussions indicate they rarely produced working malware.
  • The study warns that poorly secured agentic AI and AI-written code in legitimate products could open easy attack paths, and that tech-industry layoffs may push more skilled developers toward underground markets.