Overview
- Three Tennessee plaintiffs, proceeding anonymously as Jane Does, filed a 44-page federal class-action complaint in San Jose alleging that xAI's Grok generated and helped distribute child sexual abuse material depicting them.
- The complaint says real photos taken from school and social media were used to create photorealistic nude images and videos that were traded on Discord and Telegram, with the image generation allegedly routed through a licensed third-party app running on xAI's servers.
- Local police arrested a suspect in December 2025 after investigators linked the AI-generated files to a cache featuring at least 18 girls; some of the content continues to circulate online.
- xAI limited Grok’s image-editing features in January and blocked “undressing” functions, while Elon Musk stated he was not aware of any naked underage images generated by Grok; the complaint highlights prior promotion of a “Spicy” mode.
- The plaintiffs seek class status, a permanent injunction, and statutory damages of at least $150,000 per violation under Masha's Law, citing research estimating that Grok has produced millions of sexualized images, thousands of which may depict children.