Particle.news

Deepfake X-Rays Can Fool Radiologists and AI, Study Finds

The authors call for watermarking plus cryptographic signing to protect medical records from tampering.

Overview

  • The peer-reviewed Radiology study, published Tuesday, found AI-generated X-rays realistic enough to mislead experts and leading AI models.
  • Seventeen radiologists from 12 centers reviewed 264 images, split evenly between real and synthetic; only 41% suspected fakes before being told some were synthetic, and average accuracy rose to 75% afterward.
  • Four multimodal AI systems—OpenAI’s GPT-4o and GPT-5, Google’s Gemini 2.5 Pro, and Meta’s Llama 4 Maverick—ranged from about 57% to 85% accuracy in spotting fakes.
  • Researchers cataloged telltale signs of synthetic images, including overly smooth bones, unnaturally straight spines, overly symmetrical lungs, uniform vessel patterns, and unusually clean, one-sided fractures.
  • The team warned of fraud and cyber risks, urged invisible watermarks and technologist-linked cryptographic signatures, and released an educational deepfake dataset with quizzes; it also cautioned that 3D fakes in CT and MRI scans are likely next.
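The technologist-linked signatures the authors recommend can be sketched in a few lines. The snippet below is a minimal illustration, not the study's proposal: it uses a symmetric HMAC-SHA256 tag as a stand-in (a real deployment would use asymmetric signatures such as Ed25519 so that verifiers never hold the signing key), and the key, function names, and image bytes are all hypothetical.

```python
import hashlib
import hmac

# Hypothetical per-technologist secret key; a production system would use an
# asymmetric key pair so hospitals can verify without being able to sign.
TECH_KEY = b"technologist-7f3a-secret"

def sign_image(image_bytes: bytes, key: bytes = TECH_KEY) -> str:
    """Return an HMAC-SHA256 tag binding the image to the signing technologist."""
    return hmac.new(key, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str, key: bytes = TECH_KEY) -> bool:
    """Accept the image only if its bytes are unchanged since signing."""
    return hmac.compare_digest(sign_image(image_bytes, key), tag)

xray = b"\x89PNG...raw scan bytes..."   # placeholder for real pixel data
tag = sign_image(xray)
print(verify_image(xray, tag))              # untouched image verifies
print(verify_image(xray + b"\x00", tag))    # any tampering fails
```

Because the tag covers every byte of the image, even a single altered pixel invalidates it, which is what makes this approach complementary to invisible watermarks embedded in the pixels themselves.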