Title: Invariant and selective representations with applications to visual cortex
6 March 2017, 11h00, room Y506 (Byron building)
The present phase of Machine Learning is characterized by supervised learning algorithms that rely on large sets of labeled examples. The next phase is likely to focus on algorithms capable of learning from very few labeled examples, as humans seem able to do. We propose an approach to this problem and describe the underlying theory, based on the unsupervised, automatic learning of a "good" representation for supervised learning, characterized by small sample complexity. We use this theoretical framework to model the feedforward part of the primary visual cortex, showing how neurons can implement an image representation that is invariant to group transformations of an object while remaining selective for different objects. Moreover, we show how several aspects of the neuroscience of visual recognition find a natural explanation in this context: in particular, Gabor-like filters in the early stages of visual computation, class-specific tuned modules in the brain (e.g. those specialized in the detection of faces), and mirror-symmetric tuned cells in the face module of the monkey cortex.
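The invariant-yet-selective representation described above can be illustrated with a minimal sketch: pool the dot products of an image with the orbit of a stored template under a transformation group (here, cyclic translations of a 1-D signal), and use a permutation-invariant pooling (sorting) so that the signature is unchanged when the image itself is transformed. The function names and the choice of sorting as the pooling operation are illustrative assumptions, not the speaker's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def orbit(template, n):
    # Orbit of a 1-D template under the cyclic translation group.
    return np.stack([np.roll(template, k) for k in range(n)])

def signature(image, template_orbit):
    # Dot products of the image with every transformed template,
    # pooled by sorting (a simple permutation-invariant pooling;
    # histograms or moments would also work).
    return np.sort(template_orbit @ image)

n = 16
template = rng.normal(size=n)
image = rng.normal(size=n)
shifted = np.roll(image, 5)  # same "object", transformed

orb = orbit(template, n)
s1 = signature(image, orb)
s2 = signature(shifted, orb)

# Invariance: a translated image yields the same signature,
# because translating the image only permutes the dot products.
print(np.allclose(s1, s2))
```

A different image generically produces a different signature, which is the selectivity side of the representation: invariance within an orbit, discrimination across orbits.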