Synthetic data shuffling accelerates the convergence of federated learning under data heterogeneity

Bo Li, Yasin Esfandiari, Mikkel N. Schmidt, Tommy Sonne Alstrøm, Sebastian U. Stich

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

In federated learning, data heterogeneity is a critical challenge. A straightforward solution is to shuffle the clients' data to homogenize the distribution. However, this may violate data access rights, and how and when shuffling can accelerate the convergence of a federated optimization algorithm is not theoretically well understood. In this paper, we establish a precise and quantifiable correspondence between data heterogeneity and the parameters in the convergence rate when a fraction of the data is shuffled across clients. We show that shuffling can, in some cases, reduce the gradient dissimilarity quadratically in the shuffling percentage, thereby accelerating convergence. Motivated by this theory, we propose a practical approach that addresses the data access rights issue by shuffling locally generated synthetic data instead of raw data. Experimental results show that shuffling synthetic data improves the performance of multiple existing federated learning algorithms by a large margin.
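To make the shuffling mechanism concrete, below is a minimal sketch of partial cross-client shuffling. The function name, the round-robin redistribution, and the toy labelled data are illustrative assumptions, not the paper's implementation; in the paper the pooled samples would be locally generated synthetic data, and the analysis quantifies how the gradient-dissimilarity term in the convergence rate shrinks (in some cases quadratically) with the shuffled fraction q.

```python
import random

def shuffle_fraction_across_clients(client_data, q, seed=0):
    """Pool a fraction q of each client's samples, shuffle the pool,
    and redistribute it evenly across clients. With q = 0 nothing
    changes; with q = 1 the client distributions are fully homogenized.
    (Hypothetical helper for illustration, not the authors' code.)"""
    rng = random.Random(seed)
    pool, kept = [], []
    for data in client_data:
        data = list(data)
        rng.shuffle(data)
        k = int(q * len(data))      # number of samples this client contributes
        pool.extend(data[:k])
        kept.append(data[k:])
    rng.shuffle(pool)
    # Deal pooled samples back out round-robin, so each client's share of
    # the pool approximates the global mixture of distributions.
    for i, sample in enumerate(pool):
        kept[i % len(kept)].append(sample)
    return kept

# Toy usage: three clients whose label distributions are disjoint.
clients = [[("x", 0)] * 10, [("x", 1)] * 10, [("x", 2)] * 10]
mixed = shuffle_fraction_across_clients(clients, q=0.4)
print([len(c) for c in mixed])  # each client keeps 10 samples, now mixed
```

After this step, each client trains on a q-mixture of the global distribution and its local one, which is the regime the paper's convergence analysis covers.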
Original language: English
Journal: Transactions on Machine Learning Research
Number of pages: 26
ISSN: 2835-8856
Publication status: Published - 2024
