Speaker: Xavier Hinaut, Inria Bordeaux, Mnemosyne Team.
Date and place: March 11, 2021 at 10:30, videoconference
The general neural mechanisms for the encoding, learning, and production of complex sequences (and their syntax) remain to be unveiled. Modelling such mechanisms can be tackled from different perspectives: from the neuronal modelling of motor sequence categories in monkeys, to the decoding of sensorimotor neuronal activity in songbirds, to the modelling of human sentence processing with recurrent neural networks. The latter is the starting point for applications to human-robot interaction through natural language. In particular, one goal is to obtain a single generic artificial neural substrate that can learn syntax in several languages and also model the learning of motor and vocal sequences. For instance, this involves modelling how birds learn their songs while respecting biological and developmental constraints. Although we cannot speak of a “language of birds”, some birds, such as domestic canaries, produce songs with a complex syntax that is difficult to describe in Markovian terms alone (i.e., with memory of the previous state only).

Sequence processing often involves (short-term) working memory, which is why we also study the memory capacities and limitations of random recurrent neural networks (such as Reservoir Computing). In particular, we look at how such networks can learn from information-gating mechanisms.
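As a minimal illustration of the Reservoir Computing idea mentioned above, the sketch below builds a small echo state network and trains a linear readout to reproduce its input delayed by a few time steps, a standard toy probe of a random recurrent network's short-term memory. This is a hedged, self-contained example, not the speaker's actual model: the reservoir size, spectral radius, delay, and ridge parameter are illustrative choices, and only NumPy is assumed.

```python
import numpy as np

# Toy echo state network probing short-term memory:
# train a readout to output the input signal delayed by `delay` steps.
# All parameter values below are illustrative assumptions.

rng = np.random.default_rng(0)
n_res, delay, T = 100, 3, 1000

# Random recurrent weights, rescaled to spectral radius < 1
# (a common sufficient condition for the echo state property).
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1, 1, size=(n_res, 1))

u = rng.uniform(-1, 1, size=(T, 1))      # random scalar input sequence
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + (W_in @ u[t]))   # reservoir update (fixed weights)
    states[t] = x

# Align states at time t with the target u[t - delay],
# and discard an initial washout period to forget the zero initial state.
X, y = states[delay:], u[:-delay]
washout = 50
X, y = X[washout:], y[washout:]

# Ridge-regression readout: only these output weights are trained.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
mse = float(np.mean((pred - y) ** 2))
print(f"delay-{delay} recall MSE: {mse:.4f}")
```

A reservoir of this size recalls a short delay almost perfectly; increasing `delay` degrades the fit and exposes the memory capacity limits that the abstract refers to.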