Laplacian Autoencoders for Learning Stochastic Representations

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review



Established methods for unsupervised representation learning, such as variational autoencoders, produce no or poorly calibrated uncertainty estimates, making it difficult to evaluate whether learned representations are stable and reliable. In this work, we present a Bayesian autoencoder for unsupervised representation learning, which is trained using a novel variational lower bound of the autoencoder evidence. This bound is maximized using Monte Carlo EM with a variational distribution that takes the shape of a Laplace approximation. We develop a new Hessian approximation that scales linearly with data size, allowing us to model high-dimensional data. Empirically, we show that our Laplacian autoencoder estimates well-calibrated uncertainties in both latent and output space. We demonstrate that this results in improved performance across a multitude of downstream tasks.
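To make the abstract's recipe concrete, the following is a minimal NumPy sketch of the generic Laplace-approximation idea on a toy linear model — not the paper's actual method, which uses deep autoencoders, Monte Carlo EM, and a bespoke Hessian approximation. It illustrates two points the abstract makes: the posterior over weights is a Gaussian centred at the fitted weights with covariance from an (approximate) Hessian, and a diagonal Hessian accumulated per example costs only linear time in the number of data points. All variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: scalar outputs from a linear map plus noise. This is a
# stand-in for a reconstruction model; the paper uses deep networks.
N, D = 200, 5
X = rng.normal(size=(N, D))
w_true = rng.normal(size=D)
y = X @ w_true + 0.1 * rng.normal(size=N)

prior_prec = 1.0   # precision of an isotropic Gaussian prior over weights
noise_var = 0.01   # assumed observation noise variance

# MAP estimate (closed form here; gradient descent in general).
A = X.T @ X / noise_var + prior_prec * np.eye(D)
w_map = np.linalg.solve(A, X.T @ y / noise_var)

# Diagonal Hessian approximation, accumulated one example at a time:
# the cost is linear in N, the property the abstract emphasises.
h_diag = np.full(D, prior_prec)
for x in X:
    h_diag += x * x / noise_var

# Laplace posterior: Gaussian at w_map with diagonal covariance 1/h_diag.
post_var = 1.0 / h_diag

# Calibrated predictive uncertainty for a new input:
# variance from the weight posterior plus the observation noise.
x_new = rng.normal(size=D)
pred_mean = x_new @ w_map
pred_var = np.sum(x_new**2 * post_var) + noise_var
```

For a deep model one would replace the closed-form MAP step with standard training and the per-example `x * x` term with a per-example squared-Jacobian (generalized Gauss-Newton) term, keeping the same linear-in-data accumulation.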
Original language: English
Title of host publication: Proceedings of the 36th Conference on Neural Information Processing Systems
Number of pages: 27
Publication date: 2022
Publication status: Published - 2022
Event: 2022 Conference on Neural Information Processing Systems, New Orleans Ernest N. Morial Convention Center, New Orleans, United States
Duration: 28 Nov 2022 – 9 Dec 2022


Conference: 2022 Conference on Neural Information Processing Systems
Location: New Orleans Ernest N. Morial Convention Center
Country/Territory: United States
City: New Orleans

