
FuGuard: Client-Level Federated Unlearning via Generative Surrogates and Optimal Transport / Qi, Pian; Annunziata, Daniela; Jappelli, Chiara; Giampaolo, Fabio; Piccialli, Francesco. - (2025).

FuGuard: Client-Level Federated Unlearning via Generative Surrogates and Optimal Transport

Pian Qi; Daniela Annunziata; Chiara Jappelli; Fabio Giampaolo; Francesco Piccialli
2025

Abstract

Federated Learning (FL) is a widely adopted paradigm that enables collaborative model training while preserving data privacy. As concerns around data poisoning and the “right to be forgotten” continue to grow, federated unlearning, the ability to remove the influence of specific training data from a trained FL model, has become increasingly critical. However, existing unlearning methods often require expensive retraining or fail to achieve a strong forgetting effect, limiting their practicality in real-world FL systems. In this work, we propose FuGuard, a dual-strategy federated unlearning framework designed for efficient and effective client-level data removal. FuGuard combines a generative surrogate, which approximates the contribution of the target client, with an optimal transport regularization that softly constrains model parameter drift during unlearning. This approach effectively removes the influence of the target client while preserving the stability and performance of the global model. To evaluate forgetting capability, we test with backdoor attacks and membership inference attacks (MIA) that probe residual data influence. Empirical results on different benchmarks demonstrate that FuGuard significantly reduces the impact of the target client’s data while maintaining the performance of non-target clients, consistently outperforming state-of-the-art baselines in both forgetting effectiveness and accuracy retention. Our code is accessible at: \url{https://anonymous.4open.science/r/FuGuard-0263}.
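The abstract's optimal transport regularizer can be illustrated with a minimal sketch. The record does not specify FuGuard's actual formulation, so everything below is an illustrative assumption: an entropic (Sinkhorn-style) OT cost between the flattened parameters of the original global model and the model being unlearned, which could serve as the soft drift penalty the abstract describes. The function names and toy parameter vectors are hypothetical.

```python
import numpy as np

def sinkhorn_distance(x, y, eps=0.1, n_iters=200):
    """Entropic-regularized OT cost between two 1-D point clouds
    (here: flattened model parameters), with uniform weights."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    C = (x - y.T) ** 2                   # squared-distance cost matrix
    K = np.exp(-C / eps)                 # Gibbs kernel
    a = np.full(len(x), 1.0 / len(x))    # uniform source weights
    b = np.full(len(y), 1.0 / len(y))    # uniform target weights
    u = np.ones_like(a)
    for _ in range(n_iters):             # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]      # transport plan
    return float(np.sum(P * C))          # transport cost under the plan

# Hypothetical usage: penalize drift of the unlearned model away from
# the original global model during an unlearning update.
theta_global = np.array([0.5, -1.2, 0.3, 2.0])   # toy parameters
theta_unlearn = np.array([0.6, -1.0, 0.2, 1.8])  # toy parameters
reg = sinkhorn_distance(theta_global, theta_unlearn)
```

In such a scheme, `reg` would be added (scaled by a coefficient) to the unlearning loss, so that removing the target client's influence does not push the model parameters arbitrarily far from the original global model.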
Files for this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/1028327