SHELOB-FFL: addressing Systems HEterogeneity with LOcally Backpropagated Forward-Forward Learning / Izzo, S.; Giampaolo, F.; Chiaro, D.; Piccialli, F. - (2024), pp. 23-28. (Paper presented at the 44th IEEE International Conference on Distributed Computing Systems Workshops, ICDCSW 2024, held in the USA in 2024) [10.1109/ICDCSW63686.2024.00010].
SHELOB-FFL: addressing Systems HEterogeneity with LOcally Backpropagated Forward-Forward Learning
Izzo S.; Giampaolo F.; Chiaro D.; Piccialli F.
2024
Abstract
Federated Learning (FL) is a method for training Machine Learning (ML) models across many clients while preserving data privacy. To address the challenge of heterogeneous client resources, this paper presents a novel approach that combines the Forward-Forward (FF) algorithm with Backpropagation (BP). The combination yields a blockwise network structure that converges robustly without applying the chain rule across blocks, dividing the model into subnetworks that can be trained efficiently. This strategy allows network segments to be allocated dynamically to clients according to their computational resources, so each subnetwork can be optimized independently, preventing delays and memory issues. Experiments in IID and non-IID settings across several datasets assess the viability of the methodology, focusing on how data and label distributions affect convergence. The study also examines weight-aggregation and regularization techniques such as FedAvg and FedProx, adapting them to this FL approach to understand their effect. Source code available at: https://github.com/MODALUNINA/SHELOB_FFL
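The sketch below is only an illustration of the scheme the abstract describes, not the authors' implementation (the linked repository contains that). It assumes a PyTorch setting; the names FFBlock, local_step, train_client, and fedavg, as well as the specific goodness function and threshold, are hypothetical. The idea shown: each block is trained with backpropagation confined to that block against a Forward-Forward style "goodness" objective, activations are detached between blocks so no gradient crosses block boundaries, and the server averages matching blocks across clients (FedAvg).

```python
# Illustrative sketch of blockwise FF training with local BP and FedAvg aggregation.
# Not the SHELOB-FFL implementation; names and hyperparameters are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F


class FFBlock(nn.Module):
    """One subnetwork; backpropagation is used only inside this block."""

    def __init__(self, in_dim: int, out_dim: int, lr: float = 1e-3):
        super().__init__()
        self.layer = nn.Linear(in_dim, out_dim)
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalising the input keeps the goodness of earlier blocks from
        # leaking into later ones, as in the original FF algorithm.
        x = F.normalize(x, dim=1)
        return F.relu(self.layer(x))

    def goodness(self, h: torch.Tensor) -> torch.Tensor:
        # Mean squared activation per sample (a common FF goodness measure).
        return h.pow(2).mean(dim=1)

    def local_step(self, x_pos, x_neg, threshold: float = 2.0):
        """Train this block on positive/negative data; gradients stay local."""
        g_pos = self.goodness(self.forward(x_pos))
        g_neg = self.goodness(self.forward(x_neg))
        # Push goodness of positive samples above the threshold and
        # goodness of negative samples below it.
        loss = F.softplus(torch.cat([threshold - g_pos, g_neg - threshold])).mean()
        self.opt.zero_grad()
        loss.backward()          # BP confined to this block
        self.opt.step()
        # Detach outputs so the next block never backpropagates through this one.
        with torch.no_grad():
            return self.forward(x_pos), self.forward(x_neg)


def train_client(blocks, x_pos, x_neg, local_epochs: int = 1):
    """A client trains only the blocks it was assigned, one after another."""
    for _ in range(local_epochs):
        h_pos, h_neg = x_pos, x_neg
        for block in blocks:
            h_pos, h_neg = block.local_step(h_pos, h_neg)


@torch.no_grad()
def fedavg(client_blocks):
    """Plain FedAvg: average corresponding block parameters across clients."""
    n_clients = len(client_blocks)
    for block_idx in range(len(client_blocks[0])):
        avg = {
            k: sum(cb[block_idx].state_dict()[k] for cb in client_blocks) / n_clients
            for k in client_blocks[0][block_idx].state_dict()
        }
        for cb in client_blocks:
            cb[block_idx].load_state_dict(avg)
```

Under this reading, a resource-constrained client could be assigned only the first block while a stronger client trains all of them, and the server averages matching blocks after each round; a FedProx-style variant would add a proximal term to each block's local loss.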