Neural modelling for learning complex sequences: applications to human-robot interactions and songbirds

Speaker: Xavier Hinaut, Inria Bordeaux, Mnemosyne Team.

Date and place: March 11, 2021 at 10:30, VISIO-CONFERENCE

Abstract:

General neural mechanisms of the encoding, learning and production of complex sequences (and their syntax) are still to be unveiled. Modelling such mechanisms can be tackled from different perspectives: from the neuronal modelling of motor sequence categories in monkeys, to the decoding of sensorimotor neuronal activity in songbirds, to the modelling of human sentence processing with recurrent neural networks. The latter is the starting point of applications to Human-Robot Interaction through natural language. In particular, one of the goals is to obtain a common generic artificial neural substrate that can learn syntax in several languages and model the learning of motor and vocal sequences. For instance, this involves modelling how birds learn their songs while respecting biological and developmental constraints. Although we cannot speak of a “language of birds”, some birds such as domestic canaries produce songs with a complex syntax that is difficult to describe in Markovian terms alone (i.e. with a memory of the previous state only). Sequence processing often involves (short-term) working memory, which is why we also study the memory capabilities and limitations of random recurrent neural networks (such as those used in Reservoir Computing). In particular, we look at how such networks can learn information gating mechanisms.
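For readers unfamiliar with Reservoir Computing, the following minimal echo state network sketch in NumPy may help situate the approach. It is an illustrative toy, not the speaker's actual model: the network size, leak rate, spectral radius and the next-step sine-prediction task are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
n_in, n_res = 1, 100  # 1 input, 100 reservoir units (illustrative sizes)

# Fixed random input and recurrent weights; only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # spectral radius < 1 (echo state property)

def run_reservoir(u, leak=0.3):
    """Collect leaky-integrated reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
T = 500
u = np.sin(np.linspace(0, 20 * np.pi, T + 1))[:, None]
X = run_reservoir(u[:-1])  # reservoir states, shape (T, n_res)
y = u[1:, 0]               # next-step targets

# Ridge-regression readout: the only part that is learned.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

The key design choice of Reservoir Computing is visible here: the recurrent weights stay fixed and random, and learning reduces to a cheap linear regression on the reservoir states.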

Presenter’s Bio:

The work of Xavier Hinaut lies at the frontier of several domains (neuroscience, machine learning, robotics and linguistics): from the modeling of the neuronal encoding of motor sequence categories in primates to the decoding of sensorimotor neuronal activity in songbirds (domestic canaries). An important part of his work is dedicated to modeling human sentence processing with recurrent neural networks, which is the starting point of applications to Human-Robot Interaction through natural language.
He is interested in the neural mechanisms of encoding, learning and producing complex sequences and their syntax. One of the translational goals is to find a common generic neural substrate model based on random recurrent neural networks.
Such a generic substrate could learn the bases of syntax in several languages and model the learning of motor and vocal sequences in various species. The model is also applied to the learning of birdsong while respecting biological and developmental constraints. Although we cannot speak of a language of birds, some birds, such as domestic canaries, produce songs with a complex syntax that is difficult to characterize in terms of Markovian processes alone (i.e. with transitions based only on very short-term memory). He therefore also studies the working memory capabilities and limitations of random recurrent neural networks; in particular, he looks at how such networks can learn information gating mechanisms.
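To make the Markovian limitation concrete, here is a toy first-order transition model over syllable sequences. The syllable alphabet and songs are invented for the example (not canary data); the point is that such a model conditions only on the current syllable, so it cannot capture phrase-length statistics or longer-range dependencies of the kind found in canary song.

```python
import numpy as np

# Hypothetical syllable alphabet and songs, invented for illustration.
syllables = ["A", "B", "C"]
songs = [list("AABBBC"), list("AABBC"), list("ABBBBC")]

idx = {s: i for i, s in enumerate(syllables)}
counts = np.zeros((len(syllables), len(syllables)))

# Count first-order transitions: the model only sees P(next | current).
for song in songs:
    for cur, nxt in zip(song, song[1:]):
        counts[idx[cur], idx[nxt]] += 1

row_sums = counts.sum(axis=1, keepdims=True)
P = counts / np.maximum(row_sums, 1)  # rows with no outgoing transitions stay 0
print(P)

# A first-order Markov model cannot encode rules like "the number of B
# repetitions depends on earlier context": that requires a longer memory,
# which is one motivation for recurrent network models of song syntax.
```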