Since the pioneering work of David Hubel and Torsten Wiesel, winners of the 1981 Nobel Prize, our understanding of biological visual systems has been dominated by a feed-forward, linear-filtering approach. The objective was to unveil which features are extracted by the selective tuning functions of neurons, and how the cascade of cortical areas yields increasingly complex receptive fields able to encode more complex features and shapes. The main experimental tools were local, stationary stimuli designed to probe steady-state, piecewise information processing. Accordingly, most current models of visual processing follow the same logic: a feed-forward cascade of linear filters and static non-linearities. The dominant view of biological visual motion processing is a perfect illustration of this mainstream approach.
However, this approach largely ignores two key aspects of visual processing. First, natural inputs are almost always ambiguous, dynamic and non-stationary, as with objects moving along complex trajectories. To process them, the visual system must segment them from the scene, estimate their position and direction over time, and predict their future location and velocity. Second, each of these processing steps, from the retina to the highest cortical areas, is implemented by an intricate interplay of feed-forward, feedback and horizontal interactions. For instance, a moving object is not only processed locally, but also generates lateral and feedback propagation at each retinal and cortical stage. Thus, it is still unclear how the early visual system processes complex features and shapes. This question is particularly challenging, as it requires probing such sequences of events at multiple scales (from single cells to large recurrent networks) and multiple stages (retina, primary visual cortex (V1)).
In the context of the Trajectory project, we propose such an integrated approach. Using state-of-the-art micro- and mesoscopic recording techniques, and combining experimental and modeling approaches, we aim to dissect population responses at two key stages of visual motion encoding: the retina and V1. We shall address the following questions:
- How is a translating bar represented within a hierarchy of visual networks, and under which conditions does it yield anticipatory responses? What are the mechanisms underlying anticipation along the target trajectory?
- How is visual processing shaped by the recent history of motion along a more or less predictable trajectory? Do early visual networks propagate predictive information about the nature (orientation, direction) of the moving object?
- How much does encoding in V1 simply reflect transformations already performed at the retinal level, and how much is added by cortical information processing?
These questions will be considered both at the experimental and theoretical/computational levels.
Institut des Neurosciences de la Timone (Marseille, France), Institut de la Vision (Paris, France), Institute of Neurosciences in Valparaiso (Chile), and the Inria Biovision team (Sophia Antipolis, France)