Overview
- The peer-reviewed study, published in Nature Neuroscience, analyzed brain activity from 130 awake two-month-old infants as they viewed images drawn from 12 everyday categories.
- AI models decoded the infants’ neural activity patterns to predict which category of image they were viewing, demonstrating reliable category representation as early as two months (a minimal decoding sketch follows this list).
- A longitudinal follow-up captured usable data from 66 of the infants at nine months, by which point category distinctions, especially between living and inanimate things, were markedly stronger.
- Researchers kept the babies comfortable and still using reclined beanbags, noise-cancelling headphones, and short 15–20 minute scan sessions, enabling collection of high-quality data.
- The team describes the dataset as the largest longitudinal awake-infant fMRI effort to date and highlights potential applications for early diagnostics, education, and biologically inspired AI.
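
For readers unfamiliar with neural decoding, the sketch below illustrates the general idea in Python: a cross-validated classifier is trained to predict the viewed image category from voxel activity patterns. This is a minimal illustration on synthetic data, not the study's actual pipeline; all sizes, the logistic-regression decoder, and the signal model are assumptions for demonstration only.

```python
# Minimal sketch of category decoding from neural patterns (synthetic data,
# not the study's methods): predict the viewed image category from voxel
# activity using a cross-validated classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Assumed sizes for illustration only.
n_trials, n_voxels, n_categories = 240, 500, 12
labels = rng.integers(0, n_categories, size=n_trials)

# Synthetic "fMRI" patterns: noise plus a small category-specific signal,
# so decoding accuracy lands above the 1/12 chance level.
category_templates = rng.normal(0.0, 1.0, size=(n_categories, n_voxels))
patterns = rng.normal(0.0, 1.0, size=(n_trials, n_voxels)) + 0.5 * category_templates[labels]

decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(decoder, patterns, labels, cv=5)

print(f"Decoding accuracy: {scores.mean():.2f} (chance = {1 / n_categories:.2f})")
```

Above-chance accuracy on held-out trials is the usual evidence that a category is reliably represented in the measured activity; the study's reported result is of this kind, though its specific models and data differ.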