Video-evoked responses are reliably mapped across occipital, temporal, and parietal cortices.
Human-written sentence descriptions of the videos correlate more strongly with brain activity than simple labels like "object" or "action".
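A common way to quantify this kind of comparison is representational similarity analysis (RSA): build a dissimilarity matrix over the stimuli from the brain responses and from each feature space, then correlate their upper triangles. The sketch below uses synthetic data and hypothetical stand-ins for sentence embeddings and category labels; it illustrates the general RSA approach, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 10 videos, 100-voxel brain responses, plus two
# stimulus feature spaces -- rich sentence embeddings vs. coarse labels.
n_videos, n_voxels = 10, 100
brain = rng.standard_normal((n_videos, n_voxels))
sentence_emb = rng.standard_normal((n_videos, 50))        # hypothetical embeddings
labels = np.eye(5)[rng.integers(0, 5, n_videos)]          # one-hot category labels

def rdm(features):
    """Representational dissimilarity matrix:
    1 - Pearson correlation between every pair of stimulus vectors."""
    return 1.0 - np.corrcoef(features)

def rsa_score(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs (a standard RSA statistic)."""
    iu = np.triu_indices(rdm_a.shape[0], k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

brain_rdm = rdm(brain)
print("sentence RSA:", rsa_score(brain_rdm, rdm(sentence_emb)))
print("label RSA:   ", rsa_score(brain_rdm, rdm(labels)))
```

With real data, a higher RSA score for the sentence-embedding RDM than for the label RDM would indicate that full descriptions capture more of the brain's stimulus geometry than simple labels.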
The video file is a specific stimulus from the BOLD Moments Dataset (BMD), a large-scale fMRI dataset designed to study how the human brain processes short visual events.
To help you find more specific details: are you looking for the technical specifications of the video clips (like frame rate or resolution), or the fMRI processing pipeline used in the paper?
Data and pre-trained models (like the TSM ResNet50 used in the study) are available on GitHub.
The dataset contains 1,102 three-second naturalistic videos sampled from the Moments in Time (MiT) and Memento10k datasets.
The BOLD signal tracks the internal temporal structure of these 3-second events, meaning early and late parts of the signal correspond to early and late parts of the video.
The complete paper associated with this dataset and its video clips was published in Nature Communications (July 2024).