Research

Scientific context

Artificial intelligence has become a key element in most scientific fields and is now part of everyone's life thanks to the digital revolution. Statistical, machine and deep learning methods are involved in most scientific applications where a decision has to be made, such as medical diagnosis, autonomous vehicles or text analysis. Such methodologies also have significant implications in fields where a phenomenon must be understood from data. This is the case for instance in medicine, biology, astrophysics or digital humanities, where learning methods make it possible to recover hidden patterns in the data and to visualize them.

The recent and highly publicized results of artificial intelligence should not hide the remaining and new problems posed by modern data. Indeed, despite the recent improvements due to deep learning, the nature of modern data has brought specific issues. For instance, learning with high-dimensional, atypical (networks, functions, …), dynamic, or heterogeneous data remains difficult for theoretical and algorithmic reasons. The recent establishment of deep learning has also opened new questions, such as: How to learn in an unsupervised or weakly supervised context with deep architectures? How to design a deep architecture for a given situation? How to learn with evolving and corrupted data?

With the creation of this project-team, we intend to carry out research at the interface of high-dimensional statistics, model selection, deep learning, and other related methods for modeling and learning from massive volumes of complex, uncertain, and feature-rich data, often characterized by complex long-range and hierarchical dependencies among the involved variables. One major goal of this project-team is to bring together researchers in Mathematics and Computer Science jointly working on these topics to pursue high-impact research in artificial intelligence. In particular, we will be careful to validate the scalability of the proposed models through experimental implementations of the main proposals.

Research strategy

The research strategy of the Maasai team is three-fold:

  • First, the Maasai team will conduct research at the interface of statistics and machine learning in order to address AI problems on both the theoretical and algorithmic sides. It is worth noticing that this approach has been applied by the GAFA companies for years, but it is unfortunately less frequent in academia.
  • Second, the Maasai team will conduct research that links practical problems, which may come from industry or other scientific fields, with the theoretical aspects of Mathematics and Computer Science. In this spirit, the Maasai team is fully aligned with the “Core elements of AI” part of the Institut 3IA Côte d’Azur, and the team is committed to being very active within the institute.
  • Finally, the Maasai team will conduct research with transfer and innovation in mind, carrying out the developments needed to turn theoretical results into software, patents and startups.

Regarding the recently funded Institut 3IA Côte d’Azur, the Maasai project-team will of course be one of the pillars of Axis 1, “Core elements of AI”, through its research projects, the chair of Charles Bouveyron and the collaborations with companies that are partners of the institute. Even though most of the contributions of Maasai to the Institut 3IA Côte d’Azur will be to Axis 1, the team members will also likely contribute to Axes 2 and 4, i.e. “Computational Medicine” and “Smart Territories”, through interdisciplinary and industrial collaborations in medicine, digital humanities, transport, energy and defense.

Scientific objectives

Within the research strategy explained above, the Maasai project-team aims at developing statistical, machine and deep learning methodologies and algorithms to address the following four axes.

  • Axis A – Unsupervised learning: The first research axis will be the development of models and algorithms designed for unsupervised learning with modern data. Let us recall that unsupervised learning is to date the most challenging learning task. Indeed, although supervised learning has seen powerful methods emerge over the last decade, their requirement for huge annotated datasets remains an obstacle to their extension to new domains. In addition, the nature of modern data significantly differs from usual quantitative or categorical data. In this axis, we aim to propose models and methods explicitly designed for unsupervised learning on data such as high-dimensional, functional, dynamic or network data (see the first sketch after this list). All these types of data are massively available nowadays in everyday life (omics data, smart cities, …) and they unfortunately remain difficult to handle efficiently for theoretical and algorithmic reasons. The dynamic nature of the studied phenomena is also a key point in the design of reliable algorithms.
  • Axis B – Theory of deep learning: The second research axis will focus on the theory of deep learning methods. Although deep learning methods are obviously at the heart of artificial intelligence, they clearly suffer from an overall weak knowledge of their theoretical foundations and behaviour, leading to a lack of understanding. These issues are barriers to the wide acceptance of AI in sensitive applications, such as medicine, transport or defense. We aim at combining statistical (generative) models with deep learning algorithms to justify existing results and to allow a better understanding of their performance and limitations (see the second sketch after this list). In particular, we will focus on unsupervised learning, architecture selection, decision interpretability and active learning tasks, which have received less attention than for simpler models even though they are critical in most cases.
  • Axis C – Adaptive and robust learning: The team will also focus on adaptive and robust learning, which are key elements in a numerical world where evolutions are fast and data quality is sometimes questionable. Indeed, a main issue of supervised learning algorithms is their ability to account for evolutions in the modeled phenomenon. Such situations may appear when an evolution has occurred between the learning and prediction phases, or when a prediction model is to be transferred to another, similar problem. Deep learning algorithms have shown a relative (and not fully understood) robustness in these settings. However, a specific modeling of these contexts may of course greatly improve performance and even allow anomalies to be detected (see the third sketch after this list). Another related task concerns the extraction of weak relevant signals deeply buried in the data. In particular, it is reasonable to assume in some applications that subtle phenomena at a very fine scale are responsible for the overall state of a system. We will aim at developing methods able to detect these weak signals.
  • Axis D – Learning with heterogeneous data: The last research axis will be devoted to the question “how to learn with heterogeneous data?”. Heterogeneous data are indeed involved in some of the most important and sensitive applications of artificial intelligence. As a concrete example, in medicine, the data recorded on a patient in a hospital range from images to functional data and networks. It is obviously of great interest to be able to account for all the data available on a patient to propose a diagnosis and an appropriate treatment; this also applies to autonomous cars, digital humanities and biology. Proposing unified models for heterogeneous data is an ambitious task, but first attempts (e.g. the Linkage project) combining two data types have shown that more general models are feasible and significantly improve performance (see the fourth sketch after this list). We will also address the problem of reconciling structured and unstructured data, as well as data of different levels (individual and contextual data).
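
To make Axis A concrete, here is a minimal, generic sketch of model-based clustering on high-dimensional data: project to a latent space, fit Gaussian mixtures, and select the number of groups by BIC. The toy data, dimensions and BIC-based selection are illustrative assumptions, not the team's specific models.

```python
# Sketch: unsupervised clustering of high-dimensional data (generic, not Maasai's method).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Hypothetical toy data: 500 observations in 200 dimensions from 3 latent groups.
X, _ = make_blobs(n_samples=500, n_features=200, centers=3, random_state=0)

# Project to a low-dimensional latent space first, a common answer
# to the theoretical and algorithmic difficulties of high dimension.
Z = PCA(n_components=10).fit_transform(X)

# Model selection: choose the number of clusters by BIC, a classical criterion.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(Z) for k in range(1, 7)}
best_k = min(models, key=lambda k: models[k].bic(Z))
labels = models[best_k].predict(Z)
print(f"Selected {best_k} clusters by BIC")
```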
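For Axis B, one standard way statistical generative models and deep networks are combined is the variational autoencoder, where a deep network parameterizes a latent-variable model and training maximizes the ELBO. The sketch below is the textbook construction with invented dimensions, not a result of the team.

```python
# Sketch: a minimal variational autoencoder (generative model + deep network).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=8, h_dim=128):  # dimensions are illustrative
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)       # posterior mean
        self.logvar = nn.Linear(h_dim, z_dim)   # posterior log-variance
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        x_hat = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return x_hat, mu, logvar

def neg_elbo(x, x_hat, mu, logvar):
    # Negative ELBO = reconstruction error + KL divergence to the prior N(0, I).
    rec = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# Toy usage on random inputs in [0, 1].
model = VAE()
x = torch.rand(32, 784)
x_hat, mu, logvar = model(x)
loss = neg_elbo(x, x_hat, mu, logvar)
loss.backward()
```

The explicit probabilistic model is precisely what gives a handle for the theoretical analysis the axis calls for: the loss is a bound on the data likelihood rather than an ad hoc objective.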
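For Axis C, a simple way to detect an evolution between the learning and prediction phases is a classifier two-sample test: train a classifier to distinguish reference data from newly arriving data, and flag drift when it succeeds. Data, threshold and model choice below are assumptions for illustration.

```python
# Sketch: drift detection via a classifier two-sample test (illustrative threshold).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_ref = rng.normal(0.0, 1.0, size=(500, 20))   # data seen at learning time
X_new = rng.normal(0.3, 1.0, size=(500, 20))   # shifted data at prediction time

# Label which sample each row comes from and measure separability.
X = np.vstack([X_ref, X_new])
y = np.r_[np.zeros(len(X_ref)), np.ones(len(X_new))]
auc = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                      cv=5, scoring="roc_auc").mean()

# AUC near 0.5: the samples are indistinguishable (no drift);
# a clearly higher AUC signals an evolution of the phenomenon.
print(f"cross-validated AUC = {auc:.2f}",
      "-> drift suspected" if auc > 0.6 else "-> stable")
```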
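Finally, for Axis D, a minimal sketch of learning with heterogeneous data: give each data type (numeric, categorical, free text) its own preprocessing branch before a shared predictor. The columns and task are invented for illustration and do not correspond to a real dataset or to the Linkage models.

```python
# Sketch: one predictor over heterogeneous inputs via per-type branches (toy data).
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical patient records mixing numeric, categorical and text data.
df = pd.DataFrame({
    "age": [54, 37, 62, 45],
    "unit": ["cardiology", "oncology", "cardiology", "neurology"],
    "report": ["chest pain on effort", "routine follow-up",
               "irregular heartbeat", "persistent headaches"],
    "outcome": [1, 0, 1, 0],
})

# One branch per data type, concatenated into a single feature space.
features = ColumnTransformer([
    ("num", StandardScaler(), ["age"]),
    ("cat", OneHotEncoder(), ["unit"]),
    ("txt", TfidfVectorizer(), "report"),  # text column given as a single name
])
model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(df.drop(columns="outcome"), df["outcome"])
```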
