Overview
- IWF analysts logged 3,440 AI-generated child sexual abuse videos in 2025, up from just 13 the year before, within a record 312,030 confirmed CSAM reports overall.
- Nearly two-thirds of the AI videos were classified as Category A, the most extreme level under UK law, with another large share in Category B.
- Regulatory scrutiny intensified: Ofcom continued its investigation into X over Grok’s image tool, California’s attorney general opened an inquiry into xAI and Grok, and the EU said it is monitoring X’s actions.
- xAI said it restricted Grok’s ability to edit images to depict real people in revealing clothing, following public outcry over nonconsensual sexualized images of women and minors.
- UK ministers and child-protection groups called for safety-by-design requirements, bans on nudifying apps, and new offenses targeting AI models trained or adapted to generate child sexual abuse material.