Offers

Engineer Positions

  1. Ultra Low-power AI for Embedded Devices
  2. Low-power IoT embedded software platform and secure DevOps
    • Description: Software-defined paradigms are beginning to penetrate the IoT. In such paradigms, parts of the data-processing logic can be transferred on demand, on the fly, over the network from the cloud to the device, re-programming IoT devices to match agile programming or privacy requirements. In this context, the team leading lpSD-IoT has published a string of papers describing low-power runtime container prototypes using lightweight virtualization, based for instance on JavaScript or eBPF. This project aims to provide a general solution for code-snippet containerization and deployment on heterogeneous low-power devices. The project is expected to yield not only a novel design, to be described at a leading academic conference, but also an open-source implementation integrated in practice, upstream in a major OS in the field, for instance RIOT.
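To give a flavor of the lightweight virtualization involved, here is a minimal sketch of an eBPF-style snippet interpreter with a sandboxed instruction budget. It is written in Python for readability (a real implementation would be C on the device), and every name and opcode here is hypothetical, not RIOT's actual API:

```python
# Hypothetical sketch: a tiny register-based virtual machine in the spirit of
# eBPF-style lightweight virtualization. A code snippet arrives over the
# network as a list of instructions and runs in a sandbox with an instruction
# budget, so a misbehaving snippet cannot monopolize a low-power device.

def run_snippet(program, regs=None, max_insns=1000):
    """Execute a snippet of (op, dst, src) instructions on 4 registers."""
    regs = list(regs) if regs else [0, 0, 0, 0]
    pc = 0
    executed = 0
    while pc < len(program):
        if executed >= max_insns:
            raise RuntimeError("instruction budget exceeded")
        op, dst, src = program[pc]
        if op == "mov":      # dst <- immediate value
            regs[dst] = src
        elif op == "add":    # dst <- dst + reg[src]
            regs[dst] += regs[src]
        elif op == "jnz":    # jump to src if register dst is non-zero
            if regs[dst] != 0:
                pc = src
                executed += 1
                continue
        elif op == "halt":
            break
        else:
            raise ValueError(f"illegal opcode: {op}")
        pc += 1
        executed += 1
    return regs

# A deployable snippet: sum the counter in r1 down to zero into r0.
snippet = [
    ("mov", 0, 0),    # r0 = 0 (accumulator)
    ("mov", 1, 5),    # r1 = 5 (counter)
    ("mov", 2, -1),   # r2 = -1 (decrement)
    ("add", 0, 1),    # r0 += r1
    ("add", 1, 2),    # r1 -= 1
    ("jnz", 1, 3),    # loop back while r1 != 0
    ("halt", 0, 0),
]
```

The instruction budget plays the role that verification and resource accounting play in real eBPF runtimes: the host keeps control even over untrusted, dynamically deployed snippets.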

Post-Doc Positions

  1. Distributed Machine Learning, from ultra-low-power devices to Edge
  2. Mobility-aware Edge Computing for 5G
    • A direct consequence of hosting resources in a distributed way at the Edge is their exposure and sensitivity to the heterogeneity, massiveness, and uncertainty of the mobility and demands of smart devices, leading to sub-optimal edge usage in the long run. In this project, we aim to deal with these impacting factors in device behaviors. Our solutions will provide perceptive, near-real-time anticipation of mobility and demand, uncertainty handling, and self-adaptation of device-edge management. We focus on smart devices, where perceptiveness and awareness of the needs and behaviors of users and applications (where, when, and for what resources are required) dictate decision, reaction/action, and allocation/management at the edge. We will first leverage our expertise in modeling, uncertainty profiling, interpretable predictability, and personalized anticipation of the mobility behaviors and resource demands of networking users.
  3. Grant-Free Modern Random Access
    • A recent family of random access protocols, sometimes called “modern random access”, has been popularized in the last decade: the family of IRSA protocols (Irregular Repetition Slotted ALOHA). IRSA is based on successive interference cancellation (SIC); it has a noticeable overlap with some NOMA techniques, but can also operate with any packet transmission technique. In this post-doc, we will study variants of modern random access by considering realistic physical-layer features and methods developed in this project. We will also study the improvement brought by pre-computed sequences, possibly constructed with AI/ML techniques; we will consider lightweight node synchronization; and finally we will plan for actual experimentation (using a real testbed, CortexLab).
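The core of IRSA decoding can be sketched in a few lines. The following is an illustrative toy, assuming a simplified collision-channel model with perfect interference cancellation (real SIC operates on the physical layer, not on sets): each user places several replicas of its packet in a frame; any slot holding exactly one replica is decoded, and the decoded user's other replicas are cancelled, possibly turning further slots into singletons.

```python
# Toy IRSA decoder by iterative "peeling" (simplified collision channel,
# perfect successive interference cancellation assumed).

def irsa_sic_decode(replicas, n_slots):
    """replicas: dict user -> set of slot indices holding its replicas."""
    slots = [set() for _ in range(n_slots)]
    for user, positions in replicas.items():
        for s in positions:
            slots[s].add(user)
    decoded = set()
    progress = True
    while progress:
        progress = False
        for s in range(n_slots):
            if len(slots[s]) == 1:          # singleton slot: decodable
                user = next(iter(slots[s]))
                decoded.add(user)
                for p in replicas[user]:    # cancel all its replicas (SIC)
                    slots[p].discard(user)
                progress = True
    return decoded

# Two users colliding in slot 1 are untangled by peeling:
# slot 0 is a singleton, so A is decoded and cancelled from slot 1,
# which then becomes a singleton for B.
frame = {"A": {0, 1}, "B": {1, 2}}
```

This peeling process is exactly why irregular replica-degree distributions matter in IRSA: they maximize the chance that the chain of singleton slots keeps propagating.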

PhD Positions

  1. Distributing learning models along the continuum IoT-edge-cloud
    • Deep-learning-based applications have become increasingly prevalent in many industries, helping drive innovation and improve efficiency across various use cases. Classical deep-learning approaches either send all the data to remote servers with high computing capabilities, which may induce substantial delays, or rely on on-device model execution, which may result in unsatisfactory accuracy due to limited resources. We aim to explore how to distribute deep-learning algorithms and models along the end-device/Edge/Cloud continuum. The methodology we propose to follow requires: i.) the study of the AI models on which layer splitting, skipping, and early exiting can be applied (most memoryless models are candidates); ii.) the design of new AI-model offloading strategies based on model splitting, allowing dynamic model execution during the inference phase.
  2. Leveraging Vehicular Computing to increase Edge and Cloud computation capabilities
    • Intelligent Transport Systems are open to new applications and services leveraging interaction opportunities between vehicles and consumers. Among them is the possibility of using computation and/or connectivity resources offered by nearby intelligent vehicles to execute tasks from third-party devices, thus extending the existing Edge-Cloud ecosystem. We will bring solutions for vehicle-device capability sharing while dealing with the uncertainty and heterogeneity of end-user behaviors and vehicle resources. This project aims to incorporate future highly capable autonomous vehicles as essential actors in the computing ecosystem, proposing innovative solutions to distribute users’ computing requirements and optimize energy resources at the Edge. One particularity of this project is that it also considers the mobility of users and vehicles in the resource allocation process.
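The splitting-and-early-exit idea from the first PhD topic above can be sketched very simply. In this hypothetical toy, "layers" are plain functions, the offload is an ordinary function call standing in for a network hop, and the exit head and threshold are illustrative assumptions, not any specific model architecture:

```python
# Hypothetical sketch of device/edge model splitting with an early exit.
# The device runs the first layers; if the intermediate prediction is
# confident enough, it exits early, otherwise the activation is "offloaded"
# (here, a plain function call) to the remaining layers at the edge.

def split_inference(x, device_layers, edge_layers, exit_head, threshold=0.9):
    for layer in device_layers:
        x = layer(x)
    confidence, label = exit_head(x)
    if confidence >= threshold:
        return label, "device"           # early exit: no offloading needed
    for layer in edge_layers:            # offload the rest of the model
        x = layer(x)
    confidence, label = exit_head(x)
    return label, "edge"

# Toy layers: each "layer" doubles its input; the exit head is confident
# once the magnitude exceeds 10 (purely illustrative numbers).
double = lambda v: 2 * v
head = lambda v: ((1.0, "big") if abs(v) > 10 else (0.5, "small"))
```

Easy inputs finish on the device; hard ones pay the offloading cost, which is precisely the trade-off the PhD topic proposes to optimize dynamically.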
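For the vehicular topic, one way mobility can enter the allocation decision is sketched below. This is a deliberately crude, hypothetical model (greedy matching, scalar CPU shares, a single dwell-time estimate per vehicle), not the project's actual method:

```python
# Simplified sketch: greedily assign tasks to nearby vehicles, accepting a
# vehicle only if its expected dwell time in radio range covers the task's
# execution time -- a crude way to fold mobility into the allocation.

def allocate(tasks, vehicles):
    """tasks: list of (name, cpu_needed, duration);
    vehicles: list of [name, cpu_free, dwell_time] (mutated in place)."""
    assignment = {}
    for name, cpu, duration in sorted(tasks, key=lambda t: -t[1]):
        for v in vehicles:
            if v[1] >= cpu and v[2] >= duration:
                v[1] -= cpu             # reserve the vehicle's CPU share
                assignment[name] = v[0]
                break
        else:
            assignment[name] = "edge"   # fall back to the fixed edge server
    return assignment

# Two tasks, two vehicles; the big task goes to the big vehicle, and any
# task no vehicle can host falls back to the edge.
tasks = [("t1", 4, 10), ("t2", 2, 30)]
vehicles = [["v1", 4, 20], ["v2", 2, 40]]
```

Replacing the fixed dwell-time estimate with a learned, uncertainty-aware mobility prediction is where the research questions of the project begin.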

Internships
