Qiu, L.; Annunziata, D.; Giampaolo, F.; Piccialli, F. (2025). Deciphering One-Shot Federated Learning: The Pivotal Role of Pretrained Models. In Proceedings of the 2025 Federated Learning and Edge AI for Privacy and Mobility (FLEdge-AI 2025), China, pp. 16-22. [10.1145/3737899.3768517]
Deciphering One-Shot Federated Learning: The Pivotal Role of Pretrained Models
Qiu L.; Annunziata D.; Giampaolo F.; Piccialli F.
2025
Abstract
One-shot federated learning (OFL) achieves efficient distributed training through a single round of collaboration. Current approaches (knowledge distillation, parameter averaging) not only require computationally expensive auxiliary components but also fail to isolate the effects of representation quality from those of the aggregation methodology. A promising training-free paradigm leverages pretrained models (PMs) and statistical aggregation to bridge this gap. This paper systematically investigates how representation learning from PMs and strategic statistical aggregation work together to enable effective OFL. Through extensive analysis and comprehensive empirical validation, we demonstrate that PMs provide robust foundational representations that determine the performance ceiling, while statistical methods ensure effective deployment across heterogeneous federated settings by capturing essential distributional properties. These features align diverse client distributions into coherent semantic spaces, enabling high performance while preserving privacy and minimizing communication overhead.
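The training-free paradigm described above can be sketched as follows. This is a minimal illustrative example, not the paper's exact method: it assumes each client passes its data through a frozen pretrained encoder, uploads only per-class feature statistics (sums and counts) in a single round, and the server merges them into global class means used by a nearest-class-mean classifier. The helper names (`client_stats`, `server_aggregate`, `predict`) are hypothetical, and random Gaussian clusters stand in for PM embeddings.

```python
# Hedged sketch of train-free one-shot FL: frozen-PM features + statistical
# aggregation. The choice of statistics (class means) and the nearest-mean
# classifier are illustrative assumptions, not the paper's stated design.
import numpy as np

rng = np.random.default_rng(0)

def client_stats(features, labels, num_classes):
    """Per-class feature sums and counts: the client's one-shot upload."""
    dim = features.shape[1]
    sums = np.zeros((num_classes, dim))
    counts = np.zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c
        sums[c] = features[mask].sum(axis=0)
        counts[c] = mask.sum()
    return sums, counts

def server_aggregate(all_stats):
    """Merge client statistics into global per-class feature means."""
    total_sums = sum(s for s, _ in all_stats)
    total_counts = sum(c for _, c in all_stats)
    return total_sums / np.maximum(total_counts, 1)[:, None]

def predict(features, class_means):
    """Nearest-class-mean classification in the shared feature space."""
    dists = np.linalg.norm(features[:, None, :] - class_means[None], axis=2)
    return dists.argmin(axis=1)

# Toy heterogeneous setting: features drawn around class-specific centers,
# standing in for pretrained-model embeddings of non-IID client data.
centers = rng.normal(size=(3, 8))
stats = []
for _ in range(4):  # 4 clients with differing label mixes
    labels = rng.integers(0, 3, size=50)
    feats = centers[labels] + 0.1 * rng.normal(size=(50, 8))
    stats.append(client_stats(feats, labels, num_classes=3))

means = server_aggregate(stats)
test_labels = rng.integers(0, 3, size=100)
test_feats = centers[test_labels] + 0.1 * rng.normal(size=(100, 8))
acc = (predict(test_feats, means) == test_labels).mean()
```

Note how this realizes the abstract's claims: the frozen encoder (here, the cluster geometry) fixes the performance ceiling, while the aggregated statistics need only capture each class's distributional center, keeping communication to one small upload per client and never exposing raw data.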


