Abstract
Federated learning enables multiple participants to jointly train a model while keeping training data distributed and local, thus effectively mitigating the problems of data silos and privacy leakage. However, existing federated learning frameworks have proven vulnerable to backdoor attacks, in which attackers embed backdoor triggers into local models during the training phase. These triggers are activated by crafted inputs during the prediction phase, leading to misclassifications targeted by the attacker. To address this threat, existing defense methods focus on backdoor detection and backdoor erasing. However, passive backdoor detection cannot eliminate the effect of embedded backdoor patterns, while backdoor erasing may degrade model performance and incur extra computational overhead. This paper proposes ADFL, a novel adversarial distillation-based backdoor defense scheme for federated learning. ADFL deploys a generative adversarial network (GAN) on the server side to generate fake samples containing backdoor features and relabels them to obtain a distillation dataset. Then, taking the relabeled samples as inputs, knowledge distillation, with a clean model as the teacher and the global model as the student, is performed to revise the global model and eliminate the influence of its backdoored neurons, thereby effectively defending against backdoor attacks while maintaining model performance. Experimental results show that ADFL lowers the attack success rate by 95% while keeping the main-task accuracy above 90%.
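The core revision step, distilling from a clean teacher onto the possibly backdoored global student over the relabeled, GAN-generated samples, can be illustrated with a short PyTorch sketch. Everything below (the `SmallNet` architecture, the `distill_global_model` helper, the equal weighting of the soft- and hard-label terms, and all hyperparameters) is an assumption made for illustration based on the abstract, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallNet(nn.Module):
    """Hypothetical classifier standing in for the clean/global federated models."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.net(x)


def distill_global_model(teacher, student, distill_loader,
                         epochs=5, T=2.0, lr=1e-3, device="cpu"):
    """Revise the (possibly backdoored) global model by distilling from a clean teacher.

    distill_loader yields (fake_sample, relabeled_target) pairs, i.e. the
    server-side distillation dataset described in the abstract.
    """
    teacher, student = teacher.to(device), student.to(device)
    teacher.eval()
    student.train()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in distill_loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                t_logits = teacher(x)
            s_logits = student(x)
            # Soft-label term: match the teacher's temperature-softened distribution.
            kd = F.kl_div(
                F.log_softmax(s_logits / T, dim=1),
                F.softmax(t_logits / T, dim=1),
                reduction="batchmean",
            ) * (T * T)
            # Hard-label term on the relabeled fake samples.
            ce = F.cross_entropy(s_logits, y)
            loss = kd + ce  # equal weighting assumed for illustration
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

In ADFL's setting, `distill_loader` would be built from the GAN-generated samples after relabeling; the `T * T` factor is the standard scaling from Hinton et al.'s distillation formulation, which keeps the gradient magnitude of the soft-label term comparable across temperatures.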
| Original language | English |
|---|---|
| Article number | 103366 |
| Journal | Computers & Security |
| Volume | 132 |
| Number of pages | 14 |
| ISSN | 0167-4048 |
| DOIs | |
| Publication status | Published - 2023 |
Keywords
- Backdoor attack
- Backdoor defense
- Federated learning
- Generative adversarial network
- Knowledge distillation