Thomas Bouvier

PhD student, Inria

Member of the KerData research team at Inria Rennes – Bretagne Atlantique Research Center and IRISA.


Contact details

IRISA / Inria Rennes – Bretagne Atlantique
Office: Building 12D, D174 (orange level)
Université de Rennes 1 – Campus de Beaulieu
35042 Rennes, France

thomas.bouvier@inria.fr


Research Interests

My current research focuses on continual learning workloads deployed in distributed settings (HPC, cloud, fog, edge).

Relevant topics:

  • Parallel and distributed deep learning
  • Continual learning
  • Cloud data services and big data analytics
  • Stream processing

Teaching

  • INSA Rennes, Master's: Stream processing – lectures and practical sessions (12 hours, starting Dec. 2023)
  • ISTIC Rennes, Master's: Database optimization – practical sessions (30 hours, starting Oct. 2023)
  • INSA Rennes, Master's: Stream processing – lectures and practical sessions (12 hours, starting Dec. 2022)
  • INSA Rennes, Bachelor: Introduction to algorithms – lectures and practical sessions (42 hours, starting Jan. 2022)
  • INSA Rennes, Bachelor: Databases – lectures and practical sessions (36 hours, starting Sept. 2021)

Master students

  • Malvin Chevallier (2023), co-advised with Alexandru Costan (Inria). Topic: “Study of regularization techniques applied to rehearsal-based continual learning”

Projects

I am involved in the “Towards Continual Learning at Scale” JLESC project, in which we are adapting continual learning algorithms to run at supercomputer scale.

Publications

A list of publications can be found on Google Scholar.

Software

  • Neomem: a software prototype designed to mitigate catastrophic forgetting, the loss of previously acquired knowledge that arises when training neural networks on continuously generated data. Neomem follows the rehearsal approach: it retains a subset of previously observed samples and replays them during training, preserving the associated knowledge. Its data management capabilities enable data-parallel training across dozens of GPUs, providing both scalability and strong predictive performance for continual learning workloads. A minimal sketch of the rehearsal idea is given below.
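The rehearsal idea can be summarized with a short, self-contained Python sketch. This is not Neomem's implementation or API: the RehearsalBuffer class, its capacity, and the batch sizes below are illustrative assumptions. The buffer maintains a fixed-size uniform sample of the stream via reservoir sampling, and replayed samples are mixed into each incoming minibatch so that earlier data keeps contributing gradients.

    import random

    class RehearsalBuffer:
        """Fixed-capacity rehearsal buffer (illustrative, not Neomem's API).

        Reservoir sampling keeps the buffer a uniform random sample of
        all samples observed so far in the stream.
        """

        def __init__(self, capacity):
            self.capacity = capacity
            self.samples = []   # retained (input, label) pairs
            self.num_seen = 0   # total samples observed so far

        def add(self, sample):
            """Insert a sample; once the buffer is full, each new sample
            replaces a random slot with probability capacity / num_seen."""
            self.num_seen += 1
            if len(self.samples) < self.capacity:
                self.samples.append(sample)
            else:
                j = random.randrange(self.num_seen)
                if j < self.capacity:
                    self.samples[j] = sample

        def draw(self, k):
            """Draw up to k retained samples to replay in the next step."""
            return random.sample(self.samples, min(k, len(self.samples)))

    # Toy stream: 10,000 (input, label) pairs arriving in minibatches of 32.
    stream = ((float(i), i % 10) for i in range(10_000))
    buffer = RehearsalBuffer(capacity=256)
    batch = []
    for sample in stream:
        batch.append(sample)
        if len(batch) == 32:
            minibatch = batch + buffer.draw(16)  # 32 new + up to 16 replayed
            # train_step(minibatch) would run the actual update here
            for s in batch:
                buffer.add(s)
            batch = []

Reservoir sampling is only one possible retention policy; the hard part at scale, which Neomem's data management targets, is partitioning and serving such a buffer efficiently across the nodes of a data-parallel training job.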
