[Closed] Master Internship on Automated Discovery of Moving Lifeforms in Cellular Automata using Deep Neural Networks

Project

We are surrounded by dynamic and complex systems, ranging from the formation of galaxies to production systems such as those for the synthesis of medical drugs. Exploring these systems in simulation or by experiment helps us to understand them and to discover new and interesting behaviors and outcomes. This project aims to investigate new methodologies for the automated exploration of such systems.

For this purpose, we use as a model system Lenia [1,2], a continuous cellular automaton similar to Conway's Game of Life [3]. Lenia allows the creation and simulation of life-like creatures resembling microorganisms. Our goal is to find new and interesting creatures. We use an Intrinsically Motivated Goal Exploration Process (IMGEP) [4] for this exploration. Instead of searching directly in Lenia's high-dimensional parameter space, the IMGEP learns a low-dimensional feature space that describes the creatures. It then uses this smaller space as a goal space to search for creatures with unseen features. In the past, we used a deep variational autoencoder (VAE) that learned static features (form and texture) to discover many new creatures [5]. Unfortunately, these were mainly static, as the learned feature space describes only static features.
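
To make this exploration loop concrete, here is a minimal sketch in Python/PyTorch of an IMGEP with a learned goal space. The lenia and encoder objects and their methods (sample_random_params, run, mutate) are hypothetical placeholders for illustration, not the actual implementation from [5]:

import torch

def sample_goal(history):
    # Hypothetical goal sampling: uniform within the bounding box of the
    # feature vectors discovered so far.
    feats = torch.stack([feat for _, feat in history])
    lo, hi = feats.min(dim=0).values, feats.max(dim=0).values
    return lo + torch.rand_like(lo) * (hi - lo)

def imgep_exploration(lenia, encoder, n_random, n_goal_directed):
    history = []  # list of (Lenia parameters, feature vector) pairs
    # 1) Bootstrap: run the CA with random parameters.
    for _ in range(n_random):
        params = lenia.sample_random_params()
        pattern = lenia.run(params)                 # simulate Lenia
        history.append((params, encoder(pattern)))  # low-dim descriptor
    # 2) Goal-directed phase: sample a goal in the feature space, reuse the
    #    parameters of the closest previous discovery, and perturb them.
    for _ in range(n_goal_directed):
        goal = sample_goal(history)
        params, _ = min(history, key=lambda h: torch.norm(h[1] - goal).item())
        new_params = lenia.mutate(params)
        pattern = lenia.run(new_params)
        history.append((new_params, encoder(pattern)))
    return history

The key idea is that the encoder defines which creatures count as "different": goals are sampled in its feature space, so exploration is pushed toward features not yet observed.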

In this project, we want to take the next step by learning feature spaces that also describe temporal dynamics (movement behaviors). For this purpose, we want to investigate the use of Dynamical VAEs (DVAEs) [6] and transformer networks [7]. These models represent the state of the art for the processing and generation of sequential data, ranging from language processing [8] to video generation [9].
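
As a rough illustration of this direction (a sketch under assumed shapes and hyperparameters, not a finalized architecture): a per-frame encoder could compress each Lenia state into a latent vector, and a transformer could then aggregate the latent sequence into a single descriptor of the movement behavior.

import torch
import torch.nn as nn

class TemporalFeatureEncoder(nn.Module):
    # Sketch only: a CNN maps each 256x256 Lenia frame to a latent vector,
    # and a transformer encoder contextualizes the latent sequence.
    def __init__(self, latent_dim=16, n_heads=4, n_layers=2, max_len=64):
        super().__init__()
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=4, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=4), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        # Learned positional embedding so the model can represent motion order.
        self.pos = nn.Parameter(torch.zeros(1, max_len, latent_dim))
        layer = nn.TransformerEncoderLayer(d_model=latent_dim, nhead=n_heads,
                                           batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, frames):            # frames: (batch, time, 1, 256, 256)
        b, t = frames.shape[:2]
        z = self.frame_encoder(frames.flatten(0, 1)).view(b, t, -1)
        h = self.temporal(z + self.pos[:, :t])
        return h.mean(dim=1)              # one descriptor per sequence

A DVAE would additionally place a probabilistic model over the latent sequence; see [6] for a review of the variants. Either way, the resulting descriptor could serve as the goal space in the IMGEP loop sketched above.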

Task  

During your internship, you will review the relevant literature on DVAEs and transformers under the guidance of your supervisor. Together you will decide which models to use and how they might be further improved for the exploration of Lenia. You will then implement them and run exploration experiments to compare them. Finally, the goal is to report your findings in a comprehensive manner in the form of a short research paper.

Team

You will join RobotLearn, an international team of researchers and students at Inria Grenoble. Besides its research into robotics, the team has a strong background in probabilistic models for the analysis and generation of sequential data such as video and audio recordings. You will be supervised by Chris Reinke (Postdoc) and Xavier Alameda-Pineda (head of the team) during your internship.

Requirements

Our main requirements are 1) motivation, 2) general knowledge of machine learning and artificial intelligence, and 3) knowledge of Python programming. Knowledge of PyTorch is not mandatory but a plus.

Conditions

The internship will start at the beginning of 2023 and lasts 5 to 6 months. The compensation is 500 to 600 Euro per month. Additionally, you will receive subsidized lunches (one lunch costs 2 to 4 Euro). You will have a dedicated working space at Inria with a GPU-equipped workstation. Moreover, you will have access to one CPU cluster and two GPU clusters to run experiments.

Application

Please send an e-mail to chris.reinke@inria.fr including a paragraph about your motivation, your CV, and a recent transcript of your grades.

References

[1] Chan, B. W. C. (2018). Lenia: Biology of artificial life. arXiv preprint arXiv:1812.05433.
[2] Lenia Web Demo: https://chakazul.github.io/Lenia/JavaScript/Lenia.html
[3] https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
[4] Forestier, S., Portelas, R., Mollard, Y., & Oudeyer, P. Y. (2022). Intrinsically motivated goal exploration processes with automatic curriculum learning. Journal of Machine Learning Research.
[5] Reinke, C., Etcheverry, M., & Oudeyer, P. Y. (2020). Intrinsically motivated discovery of diverse patterns in self-organizing systems. In International Conference on Learning Representations.
[6] Girin, L., Leglaive, S., Bie, X., Diard, J., Hueber, T., & Alameda-Pineda, X. (2021). Dynamical Variational Autoencoders: A Comprehensive Review. Foundations and Trends in Machine Learning, 15(1-2), 1-175.
[7] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30.
[8] https://en.wikipedia.org/wiki/GPT-3
[9] Yan, W., Zhang, Y., Abbeel, P., & Srinivas, A. (2021). VideoGPT: Video generation using VQ-VAE and transformers. arXiv preprint arXiv:2104.10157.