Particle.news

OpenAI Rolls Out 'Trusted Contact' Alerts in ChatGPT for Self-Harm Risks

The opt-in safeguard signals rising legal and public scrutiny of how chatbots handle people in crisis.

Overview

  • OpenAI began rolling out Trusted Contact on Thursday, May 7, letting adult users name one person who may be alerted if chats show a serious self-harm risk.
  • The system flags concerning language, warns the user, and prompts outreach; a small trained team then reviews the case and, if risk is confirmed, sends a brief email, text, or in-app alert that includes no chat transcripts.
  • The opt-in tool is available on personal ChatGPT accounts for adults 18 and older globally (19 and older in South Korea); nominated contacts must accept within a week, and workplace tiers are not yet supported.
  • OpenAI says the safeguard was developed with clinicians and mental health groups, is not a substitute for professional care, and reviewers aim to assess cases in under an hour.
  • The rollout follows lawsuits and state probes over ChatGPT’s handling of users in crisis, as well as earlier OpenAI data showing that even the small share of users flagged for self-harm or psychosis translates to large numbers at the product’s scale.