Particle.news

OpenAI Launches Trusted Contact for ChatGPT to Flag Serious Self-Harm Risk

The opt-in alert connects at-risk users to a chosen adult through human review under growing safety scrutiny.

Overview

  • OpenAI introduced Trusted Contact on Thursday; the feature alerts a nominated adult when a trained team confirms that a ChatGPT conversation signals a serious self-harm concern.
  • Users can add one adult in ChatGPT settings, with the feature activating only after the person accepts within a week, and the contact does not need a ChatGPT account.
  • Automated systems first flag self-harm language; a small team of trained reviewers then assesses the risk, and a brief email, text, or in-app alert goes to the Trusted Contact without sharing any chat content.
  • Each notice explains the general concern and links to guidance for sensitive outreach, and ChatGPT continues to direct users to crisis hotlines and emergency services when needed.
  • OpenAI developed the feature with input from clinicians, its Expert Council, and the American Psychological Association as lawsuits and a Florida investigation scrutinize how ChatGPT has handled suicidal users.