ADFL: Defending backdoor attacks in federated learning via adversarial distillation

Chengcheng Zhu, Jiale Zhang*, Xiaobing Sun, Bing Chen, Weizhi Meng

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

Federated learning enables multiple participants to train a joint model through distributed, localized training, effectively mitigating the problems of data islands and privacy leakage. However, existing federated learning frameworks have proven vulnerable to backdoor attacks, in which attackers embed backdoor triggers into local models during the training phase. These triggers are activated by crafted inputs during the prediction phase, causing misclassifications targeted by the attackers. Existing defenses address this threat through backdoor detection or backdoor erasing. However, passive backdoor detection cannot eliminate the effect of embedded backdoor patterns, while backdoor erasing may degrade model performance and incur extra computation overhead. This paper proposes ADFL, a novel adversarial distillation-based backdoor defense scheme for federated learning. ADFL deploys a generative adversarial network (GAN) on the server side to generate fake samples containing backdoor features and relabels these samples to build a distillation dataset. Taking the relabeled samples as inputs, knowledge distillation, with the clean model as the teacher and the global model as the student, is then performed to revise the global model and eliminate the influence of its backdoored neurons, thereby effectively defending against backdoor attacks while preserving model performance. Experimental results show that ADFL lowers the attack success rate by 95% while keeping the main task accuracy above 90%.
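To make the distillation step described in the abstract more concrete, below is a minimal, hypothetical PyTorch-style sketch, not the authors' implementation. It assumes a pre-trained `generator` that produces fake samples carrying backdoor features, a `clean_teacher` model, and the aggregated `global_model`; all function names, hyperparameters, and the temperature-scaled KL loss are illustrative assumptions about how such a server-side revision could be coded.

```python
import torch
import torch.nn.functional as F

def adversarial_distillation(global_model, clean_teacher, generator,
                             num_steps=200, batch_size=64, latent_dim=100,
                             temperature=2.0, lr=1e-3, device="cpu"):
    """Sketch: revise a possibly backdoored global model by distilling it
    toward a clean teacher on GAN-generated samples (hypothetical setup)."""
    global_model.to(device).train()
    clean_teacher.to(device).eval()
    generator.to(device).eval()

    optimizer = torch.optim.Adam(global_model.parameters(), lr=lr)

    for _ in range(num_steps):
        # Generate fake samples intended to carry backdoor features.
        z = torch.randn(batch_size, latent_dim, device=device)
        with torch.no_grad():
            fake_inputs = generator(z)
            # "Relabel" the fake samples with the clean teacher's soft outputs.
            teacher_logits = clean_teacher(fake_inputs)

        student_logits = global_model(fake_inputs)

        # Temperature-scaled KL-divergence distillation loss (standard recipe,
        # assumed here rather than taken from the paper).
        loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=1),
            F.softmax(teacher_logits / temperature, dim=1),
            reduction="batchmean",
        ) * (temperature ** 2)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return global_model
```

In this sketch, the global model's predictions on the backdoor-feature samples are pulled toward the clean teacher's predictions, which is one plausible way to suppress backdoored neurons while keeping main-task behavior intact.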

Original language: English
Article number: 103366
Journal: Computers and Security
Volume: 132
Number of pages: 14
ISSN: 0167-4048
DOIs
Publication status: Published - 2023

Keywords

  • Backdoor attack
  • Backdoor defense
  • Federated learning
  • Generative adversarial network
  • Knowledge distillation
