On Disentangled Representations Learned from Correlated Data

Frederik Träuble, Elliot Creager, Niki Kilbertus, Francesco Locatello, Andrea Dittadi, Anirudh Goyal, Bernhard Schölkopf, Stefan Bauer

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

The focus of disentanglement approaches has been on identifying independent factors of variation in data. However, the causal variables underlying real-world observations are often not statistically independent. In this work, we bridge the gap to real-world scenarios by analyzing the behavior of the most prominent disentanglement approaches on correlated data in a large-scale empirical study (including 4260 models). We show and quantify that systematically induced correlations in the dataset are learned and reflected in the latent representations, which has implications for downstream applications of disentanglement such as fairness. We also demonstrate how to resolve these latent correlations, either by using weak supervision during training or by post-hoc correcting a pre-trained model with a small number of labels.
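The latent correlations the abstract refers to can be illustrated with a small numerical sketch. The snippet below is not the paper's evaluation pipeline; it simulates two ground-truth factors with an induced correlation and treats them as the latent dimensions an encoder might recover, then measures the off-diagonal of the Pearson correlation matrix. All names and the correlation strength (0.7) are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two ground-truth factors with a systematically induced correlation
# (e.g. object size and color), as in the correlated-data setting.
n = 10_000
factor_a = rng.normal(size=n)
factor_b = 0.7 * factor_a + np.sqrt(1 - 0.7**2) * rng.normal(size=n)

# A representation that is axis-aligned with the factors but trained on
# such data would inherit the correlation in its latent dimensions.
z1, z2 = factor_a, factor_b

# The off-diagonal entry of the Pearson correlation matrix quantifies
# the pairwise latent correlation.
latent_corr = np.corrcoef(z1, z2)[0, 1]
print(f"latent correlation: {latent_corr:.2f}")
```

With the induced correlation of 0.7, the measured latent correlation is close to 0.7, i.e. the dataset correlation is reflected in the representation rather than removed by it.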
Original language: English
Title of host publication: Proceedings of the 38th International Conference on Machine Learning
Number of pages: 12
Publisher: International Machine Learning Society (IMLS)
Publication date: 2021
Publication status: Published - 2021
Event: 38th International Conference on Machine Learning - Virtual event
Duration: 18 Jul 2021 - 24 Jul 2021
Conference number: 38
https://icml.cc/Conferences/2021

Conference

Conference: 38th International Conference on Machine Learning
Number: 38
Location: Virtual event
Period: 18/07/2021 - 24/07/2021
Internet address: https://icml.cc/Conferences/2021
Series: Proceedings of Machine Learning Research
Volume: 139
ISSN: 2640-3498
