Overview
- Meta introduced TRIBE v2 as a foundation model that predicts typical patterns of cortical activity from visual, auditory and language inputs (a minimal sketch of the general trimodal‑encoding idea appears after this list).
- The model was trained on large fMRI datasets collected while volunteers watched movies and listened to podcasts, giving it broad, naturalistic coverage across sensory modalities.
- Meta reports a roughly 70‑fold improvement in spatial resolution over prior systems, along with faster runtime and zero‑shot generalization to new subjects and languages.
- Because a single fMRI scan is noisy, Meta says TRIBE v2’s predicted response can sometimes match the population‑average response more closely than any one recording does (see the toy simulation after this list).
- Meta released the paper, code and model weights as open source to enable in‑silico experiments; some outlets have invoked superintelligence, while neuroscience coverage stresses that this is not mind‑reading.
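
The bullets above describe TRIBE v2 only at a high level, and Meta's actual architecture is not detailed here. As a rough illustration of what a trimodal encoding model does in general, the toy PyTorch sketch below projects pretrained video, audio and text features into a shared space and reads out per‑voxel responses. All names, dimensions and the fusion strategy are assumptions for illustration, not Meta's design.

```python
import torch
import torch.nn as nn

class TrimodalEncoder(nn.Module):
    """Toy encoder: fuse video/audio/text features, predict voxel responses.
    Dimensions and the additive fusion are illustrative stand-ins."""
    def __init__(self, d_video=768, d_audio=512, d_text=1024,
                 d_fused=256, n_voxels=1000):
        super().__init__()
        # Project each modality into a shared latent space.
        self.proj_video = nn.Linear(d_video, d_fused)
        self.proj_audio = nn.Linear(d_audio, d_fused)
        self.proj_text = nn.Linear(d_text, d_fused)
        # Linear readout from the fused representation to voxels.
        self.head = nn.Linear(d_fused, n_voxels)

    def forward(self, v, a, t):
        fused = self.proj_video(v) + self.proj_audio(a) + self.proj_text(t)
        return self.head(torch.relu(fused))

# One timepoint of pretrained-feature inputs (random stand-ins here).
model = TrimodalEncoder()
pred = model(torch.randn(1, 768), torch.randn(1, 512), torch.randn(1, 1024))
print(pred.shape)  # torch.Size([1, 1000]) -- one predicted value per voxel
```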
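
On the noise point: each scan mixes a shared stimulus‑driven signal with independent noise, so averaging many recordings recovers the signal, and a good model's prediction can track that average better than any single noisy scan does. The NumPy simulation below is a toy demonstration under arbitrary noise levels, not an analysis of Meta's data.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500                                              # timepoints
signal = rng.standard_normal(T)                      # shared "true" response
runs = signal + 2.0 * rng.standard_normal((20, T))   # 20 noisy recordings
avg = runs.mean(axis=0)                              # population average

# A model prediction that tracks the true signal with modest error.
model_pred = signal + 0.5 * rng.standard_normal(T)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print("single run vs average:", corr(runs[0], avg))
print("model pred vs average:", corr(model_pred, avg))
```

With these arbitrary settings, the model prediction typically correlates with the average at around 0.8, versus roughly 0.5 for a single run, which is the sense in which a prediction can beat one recording.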