Modeling the visual cortex through functional brain imaging

Seeing it all: Convolutional network layers map the function of the human visual system

• Convolutional network layer image representations explain ventral stream fMRI.
• This mapping follows the known hierarchical organization.
• Results hold for both static image and video stimuli.
• A full brain predictive model synthesizes brain maps for other visual experiments.
• Only deep models can reproduce observed BOLD activity.

Second-order scattering descriptors predict fMRI activity evoked by visual textures

Second-layer scattering descriptors are known to provide good classification performance on natural quasi-stationary processes such as visual textures, owing to their sensitivity to higher-order moments and their continuity with respect to small deformations. In a functional Magnetic Resonance Imaging (fMRI) experiment, we presented visual textures to subjects and compared the predictive power of these descriptors against that of simple contour energy, i.e. the first scattering layer. We conclude not only that invariant second-layer scattering coefficients better encode voxel activity, but also that well-predicted voxels need not lie in known retinotopic regions (see Fig. below).
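The comparison above can be sketched in a few lines of numpy. This is a minimal toy illustration, not the pipeline used in the paper: it uses a simple Haar-like 1-D wavelet in place of the Gabor filter banks of a real scattering transform, synthetic "texture" signals in place of image stimuli, and a simulated voxel response that depends linearly on the second-order coefficients. The point it demonstrates is the structure of the comparison: fit one ridge encoding model on first-layer coefficients only, another on first plus second layers, and compare held-out R².

```python
import numpy as np

rng = np.random.default_rng(0)

def wavelet_pass(x, scale):
    """Band-pass a 1-D signal with a Haar-like filter at dyadic scale 2**scale."""
    n = 2 ** scale
    kernel = np.concatenate([np.ones(n), -np.ones(n)]) / (2 * n)
    return np.convolve(x, kernel, mode="same")

def scattering_coeffs(x, scales=(1, 2, 3)):
    """Toy scattering-style descriptors.
    s1: averaged modulus of one wavelet pass (contour-energy analogue).
    s2: averaged modulus of a second wavelet pass applied to |W_{j1} x|,
        kept only for j2 > j1, as in the standard scattering ordering."""
    s1, s2 = [], []
    for j1 in scales:
        u1 = np.abs(wavelet_pass(x, j1))
        s1.append(u1.mean())
        for j2 in scales:
            if j2 > j1:
                s2.append(np.abs(wavelet_pass(u1, j2)).mean())
    return np.array(s1), np.array(s2)

def ridge_predict(X_tr, y_tr, X_te, alpha=1e-8):
    """Closed-form ridge regression: w = (X'X + aI)^-1 X'y."""
    d = X_tr.shape[1]
    w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(d), X_tr.T @ y_tr)
    return X_te @ w

def r2(y_true, y_pred):
    return 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)

# Synthetic "textures": contrast and spike sparsity vary independently,
# so higher-order structure is not determined by first-order energy.
n_stim, T = 200, 256
stimuli = []
for _ in range(n_stim):
    contrast = rng.uniform(0.5, 2.0)
    p_spike = rng.uniform(0.0, 1.0)
    base = rng.normal(size=T)
    spikes = rng.normal(size=T) * (rng.random(T) < p_spike)
    stimuli.append(contrast * (base + 3 * spikes))

feats1 = np.array([scattering_coeffs(s)[0] for s in stimuli])            # S1 only
feats12 = np.array([np.concatenate(scattering_coeffs(s)) for s in stimuli])  # S1 + S2

# Simulated voxel: responds linearly to first- AND second-order coefficients.
y = feats12 @ rng.normal(size=feats12.shape[1])

half = n_stim // 2
pred1 = ridge_predict(feats1[:half], y[:half], feats1[half:])
pred12 = ridge_predict(feats12[:half], y[:half], feats12[half:])

print(f"held-out R^2, first layer only   : {r2(y[half:], pred1):.3f}")
print(f"held-out R^2, first + second     : {r2(y[half:], pred12):.3f}")
```

For such a voxel, the two-layer encoding model recovers the response almost exactly, while the first-layer model leaves unexplained variance; voxel-wise maps of this R² difference are what the figure below visualizes on the cortical surface.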

More details can be found here.


Some brain regions are better explained by two scattering layers than by one (middle). These regions are symmetric across hemispheres and are observed mostly in the dorsal stream of the visual cortex. An atlas of the visual areas (left and right) shows that the main foci lie in the V1, V2, V3AB and IPS0 regions.
