Robustness Analytics to Data Heterogeneity in Edge Computing

Jia Qian, Lars Kai Hansen, Xenofon Fafoutis, Prayag Tiwari, Hari Mohan Pandey

Research output: Contribution to journal › Journal article › Research › peer-review



Federated Learning is a framework in which a centralized, remotely placed server jointly trains a model that aggregates the knowledge of distributed machines, without requiring access to the data stored on them. Some work assumes that the data generated at edge devices are independently and identically sampled from a common population distribution. However, such ideal sampling may not be realistic in many contexts. Moreover, models based on intrinsic agency, such as active sampling schemes, may lead to highly biased sampling. A pressing question is therefore: how robust is Federated Learning to biased sampling? In this work, we investigate two such scenarios experimentally. First, we study a centralized classifier aggregated from a collection of local classifiers trained on data with categorical heterogeneity. Second, we study a classifier aggregated from a collection of local classifiers trained on data acquired through active sampling at the edge. In both scenarios, we present evidence that Federated Learning is robust to data heterogeneity when the number of local training iterations and the communication frequency are appropriately chosen.
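The aggregation scheme described above can be sketched in a few lines. This is a minimal illustration of federated averaging with label-skewed clients, not the paper's implementation: the logistic-regression local model, the function names, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def local_sgd(weights, X, y, lr=0.1, epochs=5):
    """Run a few local gradient steps of logistic regression on one
    client's (possibly non-IID) data. More local epochs per round
    mean less frequent communication with the server."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
        grad = X.T @ (p - y) / len(y)        # logistic-loss gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side aggregation: average the local models,
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

Categorical heterogeneity can then be simulated by giving each client data drawn mostly from a single class and alternating local training with server-side averaging over several communication rounds.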
Original language: English
Journal: Computer Communications
Pages (from-to): 229-239
Publication status: Published - 2020


  • Intelligent Edge Computing
  • Fog Computing
  • Active Learning
  • Federated Learning
  • Distributed Machine Learning
  • User Data Privacy


