Overview
- ECRI released its 18th annual Top 10 Health Technology Hazards list on Jan. 21, placing chatbot misuse at the top for 2026.
- In ECRI’s testing, a chatbot wrongly approved placement of an electrosurgical return electrode on the shoulder blade, an error that could cause patient burns.
- Public LLMs such as ChatGPT, Claude, Copilot, Gemini and Grok are not regulated as medical devices or validated for clinical use, even as adoption grows; OpenAI says more than 40 million people seek health information on ChatGPT daily.
- The report urges AI governance committees, clinician training, verification of chatbot outputs with experts, and regular audits, with a webcast scheduled for Jan. 28 to explain the guidance.
- Other major risks on the list include potential “digital darkness” outages, substandard and falsified medical products, and recall communication gaps for home diabetes technologies.