Autoencoding beyond pixels using a learned similarity metric

Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, Ole Winther

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review



We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder (VAE) with a generative adversarial network (GAN), we can use learned feature representations in the GAN discriminator as the basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance to, e.g., translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
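The core idea of the feature-wise reconstruction objective can be illustrated with a minimal sketch. Here the "discriminator feature extractor" is a stand-in (simple 2x2 average pooling) rather than an actual GAN discriminator layer, and the function names are illustrative; the point is only that a small translation is penalised far less in feature space than in pixel space.

```python
import numpy as np

def elementwise_loss(x, x_rec):
    # Pixel-space squared error: the standard VAE reconstruction term.
    return np.mean((x - x_rec) ** 2)

def featurewise_loss(x, x_rec, feature_map):
    # Feature-space squared error: compare extracted representations
    # instead of raw pixels (in the paper, an intermediate layer of
    # the GAN discriminator plays the role of feature_map).
    return np.mean((feature_map(x) - feature_map(x_rec)) ** 2)

def avg_pool_2x2(img):
    # Toy feature extractor: average over non-overlapping 2x2 blocks.
    # A stand-in for a learned discriminator layer, used only to show
    # the translation-invariance effect.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.random((8, 8))
x_shift = np.roll(x, 1, axis=1)  # reconstruction off by a 1-pixel translation

# The pooled-feature loss penalises the small shift much less than the
# pixel-wise loss, which treats the translated image as a poor match.
print(elementwise_loss(x, x_shift), featurewise_loss(x, x_shift, avg_pool_2x2))
```

In the paper itself the feature map is learned jointly with the generator, so the similarity metric adapts to the data rather than being fixed in advance as in this sketch.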
Original language: English
Title of host publication: Proceedings of the 33rd International Conference on Machine Learning (ICML 2016)
Number of pages: 9
Publication date: 2016
Publication status: Published - 2016
Event: 33rd International Conference on Machine Learning (ICML 2016) - New York, United States
Duration: 19 Jun 2016 - 24 Jun 2016
Conference number: 33


Conference: 33rd International Conference on Machine Learning (ICML 2016)
Country/Territory: United States
City: New York
Series: JMLR: Workshop and Conference Proceedings

