Explicit Disentanglement of Appearance and Perspective in Generative Models

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

Disentangled representation learning finds compact, independent and easy-to-interpret factors of the data. Learning such representations has been shown to require an inductive bias, which we explicitly encode in a generative model of images. Specifically, we propose a model with two latent spaces: one that represents spatial transformations of the input data, and another that represents the transformed data. We find that the latter naturally captures the intrinsic appearance of the data. To realize the generative model, we propose a Variationally Inferred Transformational Autoencoder (VITAE) that incorporates a spatial transformer into a variational autoencoder. We show how to perform inference in the model efficiently by carefully designing the encoders and restricting the transformation class to be diffeomorphic. Empirically, our model separates visual style from digit type on MNIST, shape from pose in images of human bodies, and facial features from face shape on CelebA.
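The abstract describes the architecture only at a high level. The PyTorch sketch below illustrates the general idea of a two-latent-space VAE in which one latent parameterizes a spatial transformation and the other the appearance of the aligned image. It is an illustrative sketch under assumptions, not the published implementation: the paper restricts transformations to diffeomorphisms, whereas this sketch substitutes plain affine warps via F.affine_grid / F.grid_sample; the class name VITAESketch, the MLP encoders/decoder, and all layer sizes are hypothetical, and the ELBO training objective (reconstruction likelihood plus KL terms for both latents) is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VITAESketch(nn.Module):
    """Two-latent-space VAE with a spatial transformer (illustrative sketch).

    ASSUMPTION: the paper restricts transformations to diffeomorphisms;
    this sketch uses simple affine warps and arbitrary layer sizes.
    """

    def __init__(self, image_size=28, z_app=16, z_tf=6):
        super().__init__()
        d = image_size * image_size
        self.image_size = image_size
        # Encoder for the transformation ("perspective") latent.
        self.enc_tf = nn.Sequential(nn.Flatten(), nn.Linear(d, 256), nn.ReLU())
        self.tf_mu, self.tf_logvar = nn.Linear(256, z_tf), nn.Linear(256, z_tf)
        # Encoder for the appearance latent, applied to the aligned image.
        self.enc_app = nn.Sequential(nn.Flatten(), nn.Linear(d, 256), nn.ReLU())
        self.app_mu, self.app_logvar = nn.Linear(256, z_app), nn.Linear(256, z_app)
        # Decoder reconstructs the canonical (untransformed) appearance.
        self.dec = nn.Sequential(nn.Linear(z_app, 256), nn.ReLU(),
                                 nn.Linear(256, d), nn.Sigmoid())
        # Transformation latent -> 2x3 affine matrix, initialised to identity.
        self.to_theta = nn.Linear(z_tf, 6)
        nn.init.zeros_(self.to_theta.weight)
        self.to_theta.bias.data.copy_(
            torch.tensor([1.0, 0.0, 0.0, 0.0, 1.0, 0.0]))

    @staticmethod
    def reparam(mu, logvar):
        # Standard VAE reparameterization trick.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def warp(self, x, z_tf, inverse=False):
        theta = self.to_theta(z_tf).view(-1, 2, 3)
        if inverse:  # undo the transform: [A | b] -> [A^-1 | -A^-1 b]
            A, b = theta[:, :, :2], theta[:, :, 2:]
            A_inv = torch.inverse(A)
            theta = torch.cat([A_inv, -A_inv @ b], dim=2)
        grid = F.affine_grid(theta, list(x.shape), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

    def forward(self, x):
        # 1) Infer the spatial transformation from the raw image.
        h = self.enc_tf(x)
        z_t = self.reparam(self.tf_mu(h), self.tf_logvar(h))
        # 2) Align the input to a canonical frame, then infer appearance.
        g = self.enc_app(self.warp(x, z_t, inverse=True))
        z_a = self.reparam(self.app_mu(g), self.app_logvar(g))
        # 3) Decode canonical appearance, then re-apply the transformation.
        canon = self.dec(z_a).view(-1, 1, self.image_size, self.image_size)
        return self.warp(canon, z_t)


x = torch.rand(8, 1, 28, 28)   # a batch of MNIST-sized images
recon = VITAESketch()(x)       # reconstruction with the same shape as x
```

Because appearance is encoded only after the input is warped back to a canonical frame, the appearance latent is encouraged to be invariant to the transformation, which is the inductive bias the abstract refers to.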
Original language: English
Title of host publication: Proceedings of 33rd Conference on Neural Information Processing Systems
Number of pages: 19
Publication date: 2019
Publication status: Published - 2019
Event: 33rd Conference on Neural Information Processing Systems, Vancouver Convention Centre, Vancouver, Canada
Duration: 8 Dec 2019 – 14 Dec 2019
Conference number: 33
Internet address: https://nips.cc/Conferences/2019/

Conference

Conference: 33rd Conference on Neural Information Processing Systems
Number: 33
Location: Vancouver Convention Centre
Country/Territory: Canada
City: Vancouver
Period: 08/12/2019 – 14/12/2019
Internet address: https://nips.cc/Conferences/2019/
