Seminar by Chris Reinke, Inria Bordeaux
Tuesday, December 10th 2019, 11:00 – 12:00, room F107
INRIA Montbonnot Saint-Martin
Abstract: Humans show impressive learning capabilities, allowing us to adapt efficiently to new and diverse tasks. In artificial intelligence, we want artificial systems such as robots to have similar learning abilities. In this talk, I want to discuss how cognitive theories about our learning mechanisms can help us along this way. I will show how cognitive theories inform AI algorithms with two examples from my research:
1) The exploration of complex systems with intrinsically motivated goal exploration processes (IMGEPs). IMGEPs are based on mechanisms of intrinsic motivation during human development. They generate experiments by imagining goals, then try to achieve them by leveraging their past discoveries. The goals are defined in a special goal space that describes the important features of the target system. We apply deep learning methods (variational autoencoders) to learn these features autonomously from raw system observations (images). We applied our framework to Lenia, a continuous game-of-life cellular automaton, to discover a variety of complex self-organized visual patterns.
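The goal-exploration loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the toy one-dimensional system and all parameter values are assumptions for clarity, not the actual framework or the Lenia setup from the talk.

```python
import random

random.seed(0)

def run_system(params):
    """Toy stand-in for the target system: maps a parameter to an
    observed goal-space feature (in the real setting, this would mean
    running Lenia and encoding the resulting image with a VAE)."""
    return params * 2.0

# past discoveries: (parameters, observed goal-space feature) pairs
history = []

# bootstrap the history with a few random experiments
for _ in range(5):
    p = random.uniform(0.0, 1.0)
    history.append((p, run_system(p)))

for _ in range(50):
    # 1) imagine a goal in the goal space
    goal = random.uniform(0.0, 2.0)
    # 2) leverage past discoveries: find the closest previous outcome
    nearest = min(history, key=lambda h: abs(h[1] - goal))
    # 3) perturb the parameters that produced it
    p = nearest[0] + random.gauss(0.0, 0.05)
    # 4) run the experiment and store the new discovery
    history.append((p, run_system(p)))
```

Each iteration turns a self-generated goal into a new experiment, so the history of discoveries grows and progressively covers the goal space.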
2) An adaptive reinforcement learning framework that is based on findings about distinct brain regions that learn values according to different discount factors. Similarly, the RL framework is composed of several Q-learning modules, each with a different discount factor. This allows the framework to learn in parallel a set of skills, each representing a different solution to the trade-off between a high reward sum and the time needed to gain it. Based on this skill library, the framework can perform transfer and zero-shot learning to adapt efficiently to new task conditions and goals.
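The modular structure above can be sketched as parallel Q-learning updates sharing one stream of experience. The chain environment, discount factors, and hyperparameters below are illustrative assumptions, not the framework presented in the talk.

```python
import random
from collections import defaultdict

random.seed(0)

GAMMAS = [0.5, 0.9, 0.99]  # one module per discount factor
ALPHA = 0.1                # learning rate

# one Q-table per module: Q[gamma][(state, action)] -> value
Q = {g: defaultdict(float) for g in GAMMAS}

def step(state, action):
    """Toy chain of states 0..4; reward only upon reaching state 4."""
    nxt = max(0, min(4, state + action))
    return nxt, (1.0 if nxt == 4 else 0.0)

for episode in range(200):
    state = 0
    for _ in range(20):
        action = random.choice([-1, 1])  # exploratory behavior policy
        nxt, reward = step(state, action)
        # every module learns from the same transition, in parallel,
        # each with its own discount factor
        for g in GAMMAS:
            best_next = max(Q[g][(nxt, a)] for a in (-1, 1))
            td_error = reward + g * best_next - Q[g][(state, action)]
            Q[g][(state, action)] += ALPHA * td_error
        state = nxt

def greedy_action(g, state):
    """Each module's greedy policy is one 'skill' in the library."""
    return max((-1, 1), key=lambda a: Q[g][(state, a)])
```

A low-gamma module values only near-term reward, while a high-gamma module values long-delayed reward; selecting among the resulting greedy policies is what allows fast adaptation when the task's payoff structure changes.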
Biography: Chris Reinke received a Bachelor's (2010) and Master's (2012) degree in cognitive science from the University of Osnabrueck (Germany), focusing his studies on artificial intelligence and machine learning. During this time he joined the Neurocybernetics Lab (Prof. Dr. Frank Pasemann) as a research assistant, supporting its research on the artificial evolution of neural network controllers for humanoid robots. He received his Ph.D. degree from the Okinawa Institute of Science and Technology (Japan) in 2018, where he was part of the Neural Computation Unit (Prof. Dr. Kenji Doya). He received his doctorate for the investigation of an adaptive reinforcement learning framework based on findings about human decision-making processes. In 2018 he joined the Flowers Lab (Dr. Pierre-Yves Oudeyer) at Inria Bordeaux (France) as a postdoctoral researcher to investigate machine learning methods for the automated exploration of complex systems.