Latent space oddity: On the curvature of deep generative models

Georgios Arvanitidis, Lars Kai Hansen, Søren Hauberg

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review



Deep generative models provide a systematic way to learn nonlinear data distributions through a set of latent variables and a nonlinear “generator” function that maps latent points into the input space. The nonlinearity of the generator implies that the latent space gives a distorted view of the input space. Under mild conditions, we show that this distortion can be characterized by a stochastic Riemannian metric, and we demonstrate that distances and interpolants are significantly improved under this metric. This in turn improves probability distributions, sampling algorithms and clustering in the latent space. Our geometric analysis further reveals that current generators provide poor variance estimates and we propose a new generator architecture with vastly improved variance estimates. Results are demonstrated on convolutional and fully connected variational autoencoders, but the formalism easily generalizes to other deep generative models.
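The distortion described above is the classical pull-back metric: if g is the generator, the latent-space metric is M(z) = J(z)ᵀJ(z), where J is the Jacobian of g at z. The sketch below illustrates this on a hypothetical toy decoder (the generator, latent point, and finite-difference step are illustrative assumptions, not the paper's model); the paper's actual metric is stochastic because the generator also has a variance network.

```python
import numpy as np

def generator(z):
    # Hypothetical nonlinear "decoder": maps a 2-D latent point to 3-D input space.
    return np.array([z[0], z[1], z[0] ** 2 + z[1] ** 2])

def jacobian(f, z, eps=1e-6):
    # Forward finite-difference Jacobian of f at z (illustrative; an
    # autodiff framework would be used in practice).
    f0 = f(z)
    J = np.zeros((f0.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (f(z + dz) - f0) / eps
    return J

def pullback_metric(f, z):
    # Riemannian metric induced in the latent space: M(z) = J(z)^T J(z).
    J = jacobian(f, z)
    return J.T @ J

# At z = (1, 0) the toy generator stretches the first latent axis
# (the surface is steep there), so M weights that direction more.
M = pullback_metric(generator, np.array([1.0, 0.0]))
```

Curve lengths measured with this metric, rather than Euclidean distances between latent codes, are what yield the improved distances and interpolants reported in the abstract.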

Original language: English
Title of host publication: Proceedings of International Conference on Learning Representations
Number of pages: 15
Publication date: 1 Jan 2018
Publication status: Published - 1 Jan 2018
Event: 6th International Conference on Learning Representations, ICLR 2018 - Vancouver, Canada
Duration: 30 Apr 2018 - 3 May 2018


Conference: 6th International Conference on Learning Representations, ICLR 2018

