Particle.news

Study Finds AI‑Made X‑Rays Can Fool Radiologists and AI Models

The peer‑reviewed study shows a verification gap that could let fake scans slip into patient records.

Overview

  • The Radiology paper, published Tuesday, had 17 radiologists from 12 centers across six countries evaluate 264 X‑rays, split evenly between real and synthetic images.
  • Radiologists noticed fakes were present only 41% of the time when unaware they were being tested, but reached 75% accuracy at separating real from fake once told deepfakes were included.
  • Four multimodal AI systems scored roughly 57% to 85% on the task, and even the model family used to generate many of the images failed to spot all of them.
  • The team logged telltale cues such as overly smooth bones, unnaturally straight spines, symmetrical lungs, uniform vessels, and improbably clean fractures, and found that musculoskeletal specialists outperformed their peers while accuracy showed no link to years of experience.
  • Authors warned that convincing fakes could skew diagnoses, research data, insurance claims, and legal evidence; they released a teaching dataset, urged safeguards such as invisible watermarks and technologist‑linked cryptographic signatures, and cautioned that CT and MRI could be next.