Particle.news

AMA Presses Congress to Regulate Mental-Health Chatbots

The doctors' group seeks enforceable rules to replace voluntary practices so the tools augment, not supplant, clinical care.

Overview

  • On Wednesday, the AMA sent letters to the Congressional AI Caucus, the Congressional Digital Health Caucus, and the Senate AI Caucus urging strict guardrails for mental-health chatbots.
  • The letters warn of harms such as poor crisis response, misinformation, emotional dependency, and privacy risks, citing reports of self-harm prompts and a finding that 58% of users did not follow up with a clinician.
  • The group asks that chatbots disclose they are AI, be barred from posing as licensed clinicians, and refrain from offering diagnosis or treatment without formal regulatory review.
  • It urges built-in crisis detection that routes people to suicide hotlines or medical care, continuous safety monitoring with adverse-event reporting, and tougher standards for tools used by minors.
  • The AMA also seeks tight limits on data collection, explicit consent for sharing sensitive details, oversight of third-party components, and curbs on in-app ads, which could mean new compliance work for developers and stronger privacy for users.