Abstract
In representation learning, a common approach is to seek representations that disentangle the underlying factors of variation. Eastwood & Williams (2018) proposed three metrics for quantifying the quality of such disentangled representations: disentanglement (D), completeness (C) and informativeness (I). In this work, we first connect this DCI framework to two common notions of linear and nonlinear identifiability, thereby establishing a formal link between disentanglement and the closely related field of independent component analysis. We then propose an extended DCI-ES framework with two new measures of representation quality, explicitness (E) and size (S), and point out how D and C can be computed for black-box predictors. Our main idea is that the functional capacity required to use a representation is an important but thus far neglected aspect of representation quality, which we quantify using explicitness or ease-of-use (E). We illustrate the relevance of our extensions on the MPI3D and Cars3D datasets.
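To make the D and C scores concrete, the following is a minimal NumPy sketch of their computation from a feature-importance matrix, following the definitions in Eastwood & Williams (2018). In practice the importance matrix comes from predictors trained to recover each ground-truth factor from the code (e.g., random-forest feature importances); the names `dci_from_importance` and `entropy` here are illustrative, not from the paper.

```python
import numpy as np

def entropy(p, base):
    """Shannon entropy of a probability vector p in the given log base
    (0 * log 0 is treated as 0)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(base)

def dci_from_importance(R):
    """D and C scores from an importance matrix R of shape
    (num_codes, num_factors), where R[i, j] >= 0 is the importance of
    code dimension i for predicting ground-truth factor j.
    Assumes num_codes, num_factors >= 2 and no all-zero row or column."""
    num_codes, num_factors = R.shape
    # Disentanglement: each code dimension should matter for only one factor.
    P = R / R.sum(axis=1, keepdims=True)   # rows sum to 1
    D_i = 1.0 - np.array([entropy(P[i], num_factors) for i in range(num_codes)])
    rho = R.sum(axis=1) / R.sum()          # weight codes by total importance
    D = float(np.sum(rho * D_i))
    # Completeness: each factor should be captured by only one code dimension.
    Pt = R / R.sum(axis=0, keepdims=True)  # columns sum to 1
    C_j = 1.0 - np.array([entropy(Pt[:, j], num_codes) for j in range(num_factors)])
    C = float(np.mean(C_j))
    return D, C

# Toy example: 3 code dimensions, 2 factors.
R = np.array([[0.9, 0.0],
              [0.1, 0.1],
              [0.0, 0.9]])
print(dci_from_importance(R))  # e.g. D ~ 0.90, C ~ 0.70
```

Both scores lie in [0, 1]: D is high when each code row's importance concentrates on a single factor, and C is high when each factor column's importance concentrates on a single code.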
Original language | English |
---|---|
Title of host publication | Proceedings of The Eleventh International Conference on Learning Representations, ICLR 2023 |
Number of pages | 16 |
Publication date | 2023 |
Publication status | Published - 2023 |
Event | The Eleventh International Conference on Learning Representations, Kigali, Rwanda. Duration: 1 May 2023 → 5 May 2023 |
Conference
Conference | The Eleventh International Conference on Learning Representations |
---|---|
Country/Territory | Rwanda |
City | Kigali |
Period | 01/05/2023 → 05/05/2023 |