Abstract
Disentangled representation learning finds compact, independent, and
easy-to-interpret factors of the data. Learning such representations has been
shown to require an inductive bias, which we explicitly encode in a generative
model of images. Specifically, we propose a model with two latent spaces: one
that represents spatial transformations of the input data, and another that
represents the transformed data. We find that the latter naturally captures the
intrinsic appearance of the data. To realize the generative model, we propose a
Variationally Inferred Transformational Autoencoder (VITAE) that incorporates a
spatial transformer into a variational autoencoder. We show how to perform
inference in the model efficiently by carefully designing the encoders and
restricting the transformation class to be diffeomorphic. Empirically, our
model separates visual style from digit type on MNIST, shape from pose in
images of human bodies, and facial features from facial shape on CelebA.
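To make the two-latent-space construction concrete, below is a minimal PyTorch sketch of the idea, not the authors' implementation: the name `VITAESketch`, the layer sizes, and the use of plain affine transforms (a stand-in for the paper's diffeomorphic transformation class) are all illustrative assumptions. One latent code is decoded into transformation parameters that a spatial transformer applies to the output of an ordinary appearance decoder.

```python
# Minimal sketch (not the authors' implementation) of the VITAE idea:
# a variational autoencoder with two latent codes, one decoded into a
# spatial transformation and one into the appearance of the data in a
# canonical, untransformed frame. Plain affine transforms stand in for
# the paper's diffeomorphic transformation class, and all layer sizes
# and names here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VITAESketch(nn.Module):
    def __init__(self, img_size=28, z_app=16, z_trans=8):
        super().__init__()
        self.img_size = img_size
        d = img_size * img_size
        # Appearance branch: an ordinary VAE encoder/decoder pair.
        self.enc_app = nn.Linear(d, 2 * z_app)      # -> (mu, logvar)
        self.dec_app = nn.Linear(z_app, d)
        # Transformation branch: a latent code decoded into the six
        # parameters of a 2x3 affine matrix for the spatial transformer.
        self.enc_trans = nn.Linear(d, 2 * z_trans)  # -> (mu, logvar)
        self.dec_trans = nn.Linear(z_trans, 6)
        # Start at the identity warp so training begins as a plain VAE.
        nn.init.zeros_(self.dec_trans.weight)
        with torch.no_grad():
            self.dec_trans.bias.copy_(
                torch.tensor([1., 0., 0., 0., 1., 0.]))

    @staticmethod
    def reparam(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

    def forward(self, x):
        b = x.size(0)
        flat = x.view(b, -1)
        z_a, mu_a, lv_a = self.reparam(self.enc_app(flat))
        z_t, mu_t, lv_t = self.reparam(self.enc_trans(flat))
        # Decode the intrinsic appearance in the canonical frame.
        canon = torch.sigmoid(self.dec_app(z_a))
        canon = canon.view(b, 1, self.img_size, self.img_size)
        # Decode the transformation and warp the appearance with it.
        theta = self.dec_trans(z_t).view(b, 2, 3)
        grid = F.affine_grid(theta, canon.shape, align_corners=False)
        recon = F.grid_sample(canon, grid, align_corners=False)
        return recon, (mu_a, lv_a), (mu_t, lv_t)

x = torch.rand(4, 1, 28, 28)  # e.g. a batch of MNIST-sized images
recon, app_stats, trans_stats = VITAESketch()(x)
print(recon.shape)            # torch.Size([4, 1, 28, 28])
```

A full training objective would be the standard ELBO with one KL term per latent code added to the reconstruction loss; the abstract's carefully designed encoders and the diffeomorphic transformation class are not reproduced in this sketch.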
Original language | English |
---|---|
Title of host publication | Proceedings of the 33rd Conference on Neural Information Processing Systems |
Number of pages | 19 |
Publication date | 2019 |
Publication status | Published - 2019 |
Event | 33rd Conference on Neural Information Processing Systems, Vancouver Convention Centre, Vancouver, Canada. Duration: 8 Dec 2019 → 14 Dec 2019. Conference number: 33. https://nips.cc/Conferences/2019/ |
Conference
Conference | 33rd Conference on Neural Information Processing Systems |
---|---|
Number | 33 |
Location | Vancouver Convention Centre |
Country/Territory | Canada |
City | Vancouver |
Period | 08/12/2019 → 14/12/2019 |
Internet address | https://nips.cc/Conferences/2019/ |