Decoder ensembling for learned latent geometries

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

Latent space geometry provides a rigorous and empirically valuable framework for interacting with the latent variables of deep generative models. This approach reinterprets Euclidean latent spaces as Riemannian through a pull-back metric, allowing for a standard differential geometric analysis of the latent space. Unfortunately, data manifolds are generally compact and easily disconnected or filled with holes, suggesting a topological mismatch to the Euclidean latent space. The most established solution to this mismatch is to let uncertainty be a proxy for topology, but in neural network models, this is often realized through crude heuristics that lack principle and generally do not scale to high-dimensional representations. We propose using ensembles of decoders to capture model uncertainty and show how to easily compute geodesics on the associated expected manifold. Empirically, we find this simple and reliable, thereby coming one step closer to easy-to-use latent geometries.
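The abstract's core construction — pulling back a metric through the decoder and averaging over an ensemble — can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it assumes linear decoders (whose Jacobians are their weight matrices) as stand-ins for trained neural decoders, builds the expected pull-back metric G(z) = E[Jᵀ(z)J(z)] over the ensemble, and measures the Riemannian length of a discretized latent curve; a geodesic would minimize this length over curves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble of M linear decoders f_m(z) = W_m z, standing in
# for trained neural decoders (illustrative assumption, not the paper's model).
M, d_latent, d_data = 5, 2, 10
decoders = [rng.standard_normal((d_data, d_latent)) for _ in range(M)]

def pullback_metric(z):
    """Expected pull-back metric G(z) = mean_m J_m(z)^T J_m(z).

    For a linear decoder the Jacobian is its weight matrix; for a neural
    decoder it would come from automatic differentiation.
    """
    G = np.zeros((d_latent, d_latent))
    for W in decoders:
        J = W  # Jacobian of a linear map is constant in z
        G += J.T @ J
    return G / M

def curve_length(zs):
    """Riemannian length of a discretized latent curve under G."""
    total = 0.0
    for a, b in zip(zs[:-1], zs[1:]):
        dz = b - a
        G = pullback_metric((a + b) / 2)  # metric at the segment midpoint
        total += np.sqrt(dz @ G @ dz)
    return total

# A straight line between two latent points; its length under G generally
# exceeds that of the geodesic, which minimizes curve_length.
zs = np.linspace([0.0, 0.0], [1.0, 1.0], 10)
print(f"curve length: {curve_length(zs):.3f}")
```

Averaging Jᵀ J across decoders makes the metric grow where ensemble members disagree, so geodesics avoid regions of high model uncertainty — the mechanism the abstract describes for letting uncertainty proxy for topology.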
Original language: English
Title of host publication: Proceedings of the ICML 2024 Workshop on Geometry-grounded Representation Learning and Generative Modeling
Number of pages: 9
Publication status: Accepted/In press - 2025
Event: ICML 2024 Workshop on Geometry-grounded Representation Learning and Generative Modeling - Vienna, Austria
Duration: 29 Jul 2024 – 29 Jul 2024

Workshop

Workshop: ICML 2024 Workshop on Geometry-grounded Representation Learning and Generative Modeling
Country/Territory: Austria
City: Vienna
Period: 29/07/2024 – 29/07/2024
