● Tanguy Le Cloirec, a doctoral researcher at Orange, explains how decentralized federated learning and personalized federated learning work.
● As part of a project funded by the French National Research Agency (ANR), Le Cloirec is working on decoupling AI-model parameters, an approach that facilitates the development of high-performance, power-efficient and personalized federated learning.
What is decentralized federated learning and how does it differ from other federated approaches?
Standard federated learning is an Edge-AI approach that relies on a server to aggregate updates from models that are locally trained on individual devices. In decentralized federated learning, there is no central server: participating devices communicate directly with each other via a telecommunications network. For example, in the field of self-driving cars, each individual car trains a model with its local data (imagery and sensor inputs) and only exchanges model parameters with its peers without sharing any raw data. It’s an approach that avoids the power draw associated with uploading data to the cloud, which has the added advantage of ensuring better privacy.
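The peer-to-peer exchange described above can be sketched in a few lines. This is a hypothetical toy simulation, not Orange's implementation: local training is stood in for by random parameter nudges, and each device only averages its parameters with its direct neighbours (a simple gossip-averaging scheme), never sharing raw data.

```python
import random

def local_update(params, lr=0.1):
    # Stand-in for local training on private data (e.g. a car's own
    # imagery and sensor inputs): nudge each parameter slightly.
    return [p - lr * random.uniform(-1, 1) for p in params]

def gossip_average(params_a, params_b):
    # Peers exchange model parameters (never raw data) and average them.
    return [(a + b) / 2 for a, b in zip(params_a, params_b)]

random.seed(0)
# Three devices, each with its own copy of a 4-parameter model,
# connected in a line: car1 -- car2 -- car3 (no central server).
models = {dev: [random.random() for _ in range(4)]
          for dev in ("car1", "car2", "car3")}
neighbours = {"car1": ["car2"], "car2": ["car1", "car3"], "car3": ["car2"]}

for _ in range(10):  # communication rounds
    models = {dev: local_update(p) for dev, p in models.items()}
    updated = {}
    for dev, params in models.items():
        for peer in neighbours[dev]:
            params = gossip_average(params, models[peer])
        updated[dev] = params
    models = updated
```

Repeated rounds of this local-train-then-average loop gradually pull the devices' models toward agreement while all training data stays on-device.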
Personalized federated learning allows for the individual fine-tuning of upper layers in models which still share the same common base.
What is the logic of choosing decentralized federated learning over standard federated learning or a centralized cloud?
In a centralized cloud, transmitting data to a central server comes at a high cost in terms of energy use. Standard federated learning improves on that because only model weights, not raw data, are uploaded. Decentralization takes this a step further by dispensing with the central server, which adds flexibility and generates further power savings. This is particularly relevant for self-driving cars, which need to classify image data (pedestrians and other vehicles as well as weather conditions) in real time, but without overloading networks.
What are the benefits and challenges of this approach?
The main advantage is reduced power consumption: cars store their models and data locally and only communicate via lightweight updates. However, without a central server, coordination can be a challenge, in particular because individual instances of the model all need to reach similar levels of overall performance. In a decentralized network, cars in Paris don’t communicate directly with cars in Rennes. It takes more time for information to propagate across the network, which can have a negative impact on quality.
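The propagation delay mentioned above has a simple interpretation: an update spreads roughly one hop per gossip round, so the number of rounds needed to reach a device grows with its graph distance from the source. A minimal sketch, assuming a purely illustrative chain of cities between Paris and Rennes:

```python
from collections import deque

# Illustrative peer topology (the intermediate cities are assumptions,
# chosen only to show multi-hop propagation).
edges = {"paris": ["orleans"],
         "orleans": ["paris", "lemans"],
         "lemans": ["orleans", "rennes"],
         "rennes": ["lemans"]}

def rounds_to_reach(source):
    # Breadth-first search: distance in hops = gossip rounds needed
    # for information from `source` to reach each peer.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for peer in edges[node]:
            if peer not in dist:
                dist[peer] = dist[node] + 1
                queue.append(peer)
    return dist

print(rounds_to_reach("paris"))  # rennes is 3 hops, so 3 rounds away
```

The farther a peer sits in the communication graph, the staler the information it works with, which is one source of the quality impact described above.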
You are working on the ANR-funded TREES project, which focuses on personalized federated learning. Can you tell us about that?
In standard federated learning, device data is often highly diverse. Cars in Marseille, for example, don’t encounter the same conditions as cars in Canada (snow, rain, etc.). This heterogeneity means that whereas each device learns to cope with local tasks, the global model might struggle with new data — a phenomenon referred to as a generalization gap in machine learning. Personalized federated learning (PFL) addresses this issue by fine-tuning upper layers of local models, which still benefit from common base layers. This is the focus of research in the TREES project, where we are trying to improve the energy efficiency of networks in the context of new uses for distributed AI. In real-world applications, data will be heterogeneously distributed across client networks. It follows that implementing personalized federated learning solutions will enable us to mitigate the harmful impact of heterogeneous data inherent in the deployment of these AIs, while also improving the energy efficiency of networks.
How does personalized federated learning work in practice? And what is the role of parameter decoupling?
Personalized federated learning aims to adjust the training or architecture of models so that each client can handle its own specific tasks, while still maintaining the central principle of federated learning: ensuring that each participant benefits from the collective knowledge accumulated by its peers. In parameter decoupling, we split AI models into two parts: a shared base that extracts generic features (in cars, for example, it recognizes roads and pedestrians), and private layers that adapt to specific tasks (recognizing snow for cars in Canada, and buildings for cars in cities). The goal of dividing models in this way is to reconcile overall performance with local adaptation. For example, to distinguish between images of cities and countryside, the shared base learns to recognize common features (roads, sky, signage), whereas the private layers fine-tune classification to take local context into account (greenery in the countryside, buildings in cities). The challenge is to determine how the two parts should be trained: together, separately, or in distinct stages. The aim is to strike the right balance and avoid producing a model that is overly specialized and insufficiently generalized.
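The decoupling idea can be sketched concretely. In this hypothetical toy example (not the TREES implementation), each client's model is split into a shared `base` and a private `head`; after each round, only the base parameters are averaged across clients, while the heads stay local and therefore personalized:

```python
import random

random.seed(1)
clients = ["marseille", "canada"]

# Each client's model: a shared feature-extractor base + a private head.
models = {c: {"base": [random.random() for _ in range(4)],
              "head": [random.random() for _ in range(2)]}
          for c in clients}

def local_train(model, lr=0.1):
    # Stand-in for a gradient step on the client's own heterogeneous data.
    model["base"] = [p - lr * random.uniform(-1, 1) for p in model["base"]]
    model["head"] = [p - lr * random.uniform(-1, 1) for p in model["head"]]
    return model

for _ in range(5):  # federated rounds
    models = {c: local_train(m) for c, m in models.items()}
    # Aggregate ONLY the shared base: coordinate-wise mean across clients.
    mean_base = [sum(vals) / len(clients)
                 for vals in zip(*(models[c]["base"] for c in clients))]
    for c in clients:
        models[c]["base"] = list(mean_base)  # collective knowledge
        # Heads are deliberately left untouched -> personalization.
```

After each round the bases are identical across clients (shared knowledge), while the heads have diverged to fit each client's local data, which is exactly the balance between generalization and specialization described above.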
Personalized federated learning is suitable for data where privacy is an important consideration …
Yes, in healthcare for example, smart watches can be used to analyse medical data that is shared only in the form of model parameters. Solutions of this kind allow participants to collectively improve their models while maintaining adequate protection for privacy.
Tanguy Le Cloirec