Overview
- Researchers recorded the activity of individual visual-cortex neurons via calcium imaging while mice watched films, then decoded what the animals saw with a dynamic model that also incorporates movement and pupil data.
- The team started each reconstruction from a blank-screen prediction and iteratively updated pixels to shrink the mismatch between predicted and measured activity, producing reconstructions of previously unseen 10-second clips.
- Reconstruction fidelity increased with the number of neurons sampled; similarity was assessed via pixel-by-pixel correlation between original and reconstructed frames.
- Temporal alignment between videos was strong, whereas spatial resolution and visual field coverage remain limited and are the next targets for improvement.
- The peer-reviewed study by Joel Bauer and colleagues appears in eLife (2026, DOI: 10.7554/eLife.105081.3) and outlines potential for studying comparative perception and visual disorders.
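The decoding loop described above, starting from a blank-screen prediction, iteratively nudging pixels toward agreement with the measured activity, and scoring the result with pixel-by-pixel correlation, can be sketched in miniature as follows. This is a toy illustration under strong assumptions, not the authors' method: the encoder here is a random linear map `W`, and the sizes, step count, and learning rate are all made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 500 recorded neurons, a 16x16-pixel frame (assumptions).
n_neurons, n_pixels = 500, 16 * 16
W = rng.normal(scale=0.1, size=(n_neurons, n_pixels))  # stand-in linear encoder

def reconstruct_frame(measured, W, steps=500, lr=0.05):
    """Start from a blank frame, then repeatedly update pixels to reduce
    the gap between predicted activity (W @ frame) and measured activity."""
    frame = np.zeros(W.shape[1])              # blank-screen starting point
    for _ in range(steps):
        error = measured - W @ frame          # mismatch in activity space
        frame += lr * (W.T @ error)           # gradient step on squared error
    return frame

def pixel_correlation(a, b):
    """Pixel-by-pixel Pearson correlation, the similarity score used above."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Simulate activity from a known frame, then try to recover that frame.
true_frame = rng.random(n_pixels)
measured = W @ true_frame
recon = reconstruct_frame(measured, W)
print(pixel_correlation(true_frame, recon))   # approaches 1 as it converges
```

In the real study the encoder is a learned dynamic model (conditioned on movement and pupil signals) rather than a fixed linear map, and the updates run over whole 10-second video clips, but the overall shape of the loop, predict, compare to measured activity, adjust pixels, is the same.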