
Federated and edge learning for large language models / Piccialli, F.; Chiaro, D.; Qi, P.; Bellandi, V.; Damiani, E.. - In: INFORMATION FUSION. - ISSN 1566-2535. - 117:(2025). [10.1016/j.inffus.2024.102840]

Federated and edge learning for large language models

Piccialli F.; Chiaro D.; Qi P.; Bellandi V.; Damiani E.
2025

Abstract

As the demand for sophisticated language models (LMs) continues to grow, the necessity to deploy them efficiently across federated and edge environments becomes increasingly evident. This survey explores the nuanced interplay between federated and edge learning for large language models (LLMs), considering the evolving landscape of distributed computing. We investigate how federated learning paradigms can be tailored to accommodate the unique characteristics of LMs, ensuring collaborative model training while respecting privacy constraints inherent in federated environments. Additionally, we scrutinize the challenges posed by resource constraints at the edge, reporting on relevant literature and established techniques within the realm of LLMs for edge deployments, such as model pruning or model quantization. The future holds the potential for LMs to leverage the collective intelligence of distributed networks while respecting the autonomy and privacy of individual edge devices. Through this survey, the objective is to provide an in-depth analysis of the current state of efficient and privacy-aware LLM training and deployment in federated and edge environments, with the aim of offering valuable insights and guidance to researchers shaping the ongoing discussion in this field.
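The abstract mentions federated model training and edge-oriented compression techniques such as quantization. As a purely illustrative sketch (not code from the paper — the function names and toy data below are assumptions), federated averaging and naive symmetric int8 post-training quantization can be outlined as:

```python
def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average client model weights,
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

def quantize_int8(weights):
    """Naive symmetric post-training quantization to int8:
    map floats to integers in [-127, 127] with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

# Two clients with 2-parameter models; the first holds 3x more data.
print(fedavg([[1.0, 2.0], [5.0, 6.0]], [30, 10]))  # → [2.0, 3.0]
print(quantize_int8([-1.27, 0.0, 1.27])[0])        # → [-127, 0, 127]
```

Real systems apply these ideas per-tensor or per-channel and combine them with the privacy mechanisms and resource-aware scheduling the survey reviews; this sketch only shows the core arithmetic.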
Files for this item:
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/1027544
Citations
  • Scopus: 19