Partnership with Accenture

Accenture contract on the topic “Distributed Machine Learning for IoT applications” (December 2019 – November 2023)

  • Participants: Giovanni Neglia, Othmane Marfoq
  • Collaborators: Laetitia Kameni, Richard Vidal

IoT applications are set to become one of the main sources of data for training data-hungry machine learning models. Until now, IoT applications have mostly been about collecting data from the physical world and sending them to the cloud. Google’s federated learning already enables mobile phones, and other devices with limited computing capabilities, to collaboratively learn a machine learning model while keeping all training data local, decoupling the ability to do machine learning from the need to store the data in the cloud. While Google envisions computation only on users’ devices, part of it could also be executed at intermediate elements in the network, a paradigm sometimes referred to as Edge Computing or Fog Computing. Model training, as well as model serving (providing machine learning predictions), will thus be distributed among IoT devices, cloud services, and intermediate computing elements such as servers close to base stations, as envisaged by the Multi-Access Edge Computing framework. The goal of this project is to propose distributed learning schemes for the IoT scenario, taking into account in particular its communication constraints. Othmane Marfoq is funded by this project: a first 12-month pre-PhD contract has been followed by a PhD grant.
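
For illustration, the sketch below shows the federated-averaging idea underlying this paradigm: devices train on their own data and only model parameters, never the raw data, are sent to an aggregator. This is a minimal didactic example on a synthetic linear-regression task, not the project’s actual algorithm; all names and parameters are assumptions of the example.

```python
# Minimal federated-averaging (FedAvg-style) sketch on synthetic data.
# Each "device" keeps its dataset local; only model weights are exchanged.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

# Each device holds a private local dataset that never leaves it.
devices = []
for _ in range(5):
    X = rng.normal(size=(40, 3))
    y = X @ true_w + 0.1 * rng.normal(size=40)
    devices.append((X, y))

def local_update(w, X, y, lr=0.05, epochs=5):
    """Run a few epochs of gradient descent on one device's local data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w_global = np.zeros(3)
for _ in range(20):
    # Devices train locally in parallel; only updated weights are shared.
    local_weights = [local_update(w_global, X, y) for X, y in devices]
    # The server (or an intermediate edge node) averages the local models,
    # weighting each by its local dataset size.
    sizes = np.array([len(y) for _, y in devices])
    w_global = np.average(local_weights, axis=0, weights=sizes)

print("learned:", np.round(w_global, 3), "true:", true_w)
```

In a real IoT deployment the averaging step could be carried out hierarchically, with edge servers aggregating nearby devices before a cloud-level merge, which is precisely where the communication constraints studied in this project come into play.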

Accenture “Plan de Relance” (PLR) contract on the topic “Energy-Aware Federated Learning” (October 2022 – September 2024)

  • Participants: Giovanni Neglia, Charlotte Rodriguez
  • Collaborators: Laura Degioanni, Laetitia Kameni, Richard Vidal

Deep neural networks have enabled impressive accuracy improvements across many machine learning tasks, but the highest scores are often obtained by the most computationally hungry models. As a result, training a state-of-the-art model now requires substantial computational resources, with considerable energy demands and the associated economic and environmental costs. Research and development of new models multiply these costs thousands of times over, because of the need to try different model architectures and different hyper-parameters. In this project, we investigate algorithmic and system-level approaches to reduce the energy consumption of distributed ML training over the Internet. Charlotte Rodriguez’s postdoctoral position is funded by this project.
