Thesis defense

Angelo Rodio, PhD student in the NEO team, supervised by Giovanni NEGLIA and Alain JEAN-MARIE, defended his thesis on Wednesday, July 3rd, in the Euler Violet room, starting at 2:00 pm. Congratulations, Angelo!

Thesis title: Client Heterogeneity in Federated Learning Systems

Abstract:

Federated Learning (FL) is a collaborative framework in which clients, such as smartphones, train machine learning models without sharing their local data. This thesis explores the challenges of client heterogeneity in FL systems—arising from variations in local data, device capabilities, and network conditions—and its effects on model training. We propose practical algorithms to enhance learning performance and optimize resource use.

Our first contribution addresses the challenge of client participation that is heterogeneous and correlated over time and across geographic areas. We analyze FL algorithm performance under a Markovian client participation assumption and introduce the first Correlation-Aware FL algorithm (CaFed), designed to accelerate convergence in such environments.

The second contribution tackles variability in the learning process due to heterogeneous client participation. Existing variance reduction methods fail to aggregate client updates of varying staleness. We propose a Staleness-Aware FL algorithm (FedStale), which effectively aggregates both fresh and stale updates, improving performance across a wide range of heterogeneous environments.
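To illustrate the idea of combining fresh and stale updates, here is a minimal sketch, not the thesis's actual algorithm: the server caches each client's most recent update and, for clients absent in the current round, mixes in their cached (stale) update with a hypothetical down-weighting parameter `beta`. All names and the specific weighting rule are illustrative assumptions.

```python
import numpy as np

def staleness_aware_aggregate(global_model, fresh_updates, cached_updates, beta=0.5):
    """Combine this round's fresh updates with cached (stale) updates.

    fresh_updates:  dict client_id -> update vector received this round
    cached_updates: dict client_id -> last known update, for all clients
    beta:           illustrative weight applied to stale contributions
    """
    # Refresh the cache with the updates received this round.
    for cid, update in fresh_updates.items():
        cached_updates[cid] = update

    n = len(cached_updates)  # total number of known clients
    agg = np.zeros_like(global_model)
    for cid, update in cached_updates.items():
        # Participating clients contribute at full weight;
        # absent clients contribute their stale update, scaled by beta.
        weight = 1.0 if cid in fresh_updates else beta
        agg += weight * update
    return global_model + agg / n
```

The design choice this sketch highlights: rather than dropping non-participating clients entirely (which biases the aggregate toward frequently participating clients), their last known updates still contribute, at reduced weight.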

Our third contribution addresses the impact of network resource heterogeneity, where diverse communication channels degrade training performance. Our Packet Loss-Aware FL algorithm (LossyFL) provides a faster, cost-effective alternative to retransmissions and error correction. It achieves performance close to ideal lossless conditions within a few additional communication rounds.

The final contribution addresses hardware heterogeneity among clients, from end-devices to edge servers and cloud infrastructures, and the challenge of jointly training models of different sizes across these devices. Our Inference-Aware FL algorithm for Cooperative Inference Systems (FedCIS) allows more powerful devices to aid less capable ones during training, optimizing model performance across heterogeneous hardware.

The thesis concludes with reflections on the open challenges and outlines directions for future research.