Overview
- Seven lawsuits filed Wednesday in San Francisco federal court mark the first wave of litigation targeting OpenAI and CEO Sam Altman over the February shootings at a Tumbler Ridge school and home.
- The filings say OpenAI flagged violent planning in June 2025 and that safety staff urged contacting the RCMP, yet leadership chose not to alert police, and the user later returned by creating a new account.
- Plaintiffs claim ChatGPT’s design, including its memory features and long conversational threads, deepened the shooter’s fixation, and they dispute that the account was truly banned rather than merely deactivated.
- The suits seek court orders requiring police referrals when systems flag threats, blocking re‑registration by high‑risk users, adding human review before restoring access, and cutting off chats that escalate violent ideation.
- OpenAI says it has strengthened safeguards and maintains a zero‑tolerance policy against assisting violence; Altman apologized on April 23 for the company’s failure to contact law enforcement.