Particle.news

OpenAI Launches Optional ‘Trusted Contact’ Alerts for ChatGPT Self-Harm Risks

The move answers rising legal and regulatory pressure over ChatGPT’s handling of self-harm.

Overview

  • Trusted Contact, which OpenAI launched Thursday, lets adult ChatGPT users name someone who can be alerted if a conversation signals a serious self-harm risk.
  • The feature is opt-in and requires the nominated adult to accept an invitation within a week, with eligibility set at 18+ worldwide and 19+ in South Korea.
  • Alerts only go out after automated flags are reviewed by a small, trained team, and OpenAI says it aims to complete these reviews in under one hour.
  • Notifications contain no chat transcripts; they arrive by email, text, or in-app and encourage the contact to check in, with guidance for handling sensitive conversations.
  • OpenAI developed the tool with input from clinicians and the American Psychological Association amid ongoing lawsuits and a Florida investigation, though its opt-in design and the ease of creating multiple accounts limit its reach.