Abstract
Established methods for unsupervised representation learning such as
variational autoencoders produce either no uncertainty estimates or poorly
calibrated ones, making it difficult to evaluate whether learned
representations are stable and reliable. In this work, we present a Bayesian
autoencoder for unsupervised representation learning, which is trained using a
novel variational lower-bound of the autoencoder evidence. This bound is
maximized using Monte Carlo EM with a variational distribution that takes the
shape of a Laplace approximation. We develop a new Hessian approximation that
scales linearly with data size, allowing us to model high-dimensional data.
Empirically, we show that our Laplacian autoencoder estimates well-calibrated
uncertainties in both latent and output space. We demonstrate that this results
in improved performance across a multitude of downstream tasks.
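To make the recipe in the abstract concrete (a Laplace-style Gaussian posterior over network weights with a Hessian approximation whose cost is linear in the data size), here is a minimal, hedged PyTorch sketch. It is an illustration of the general idea only, not the paper's implementation: it trains a small autoencoder to a point estimate, accumulates a diagonal Hessian proxy from squared gradients in a single pass over the data, and then samples weights to obtain predictive uncertainty in output space. The names `TinyAE`, `diagonal_laplace`, `predictive`, and `prior_prec` are illustrative assumptions.

```python
# Illustrative sketch of a diagonal Laplace approximation around a trained
# autoencoder. NOT the authors' exact method; a rough proxy for the idea of
# weight-space uncertainty with a Hessian approximation linear in data size.
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    def __init__(self, d_in=784, d_latent=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 64), nn.Tanh(), nn.Linear(64, d_latent))
        self.dec = nn.Sequential(nn.Linear(d_latent, 64), nn.Tanh(), nn.Linear(64, d_in))

    def forward(self, x):
        return self.dec(self.enc(x))

def diagonal_laplace(model, loader, prior_prec=1.0):
    """Accumulate a diagonal Hessian proxy (squared gradients of the per-batch
    reconstruction loss) in one pass over the data; return posterior variances."""
    params = [p for p in model.parameters() if p.requires_grad]
    hess_diag = [torch.zeros_like(p) for p in params]
    for x in loader:
        loss = nn.functional.mse_loss(model(x), x, reduction="sum")
        grads = torch.autograd.grad(loss, params)
        for h, g in zip(hess_diag, grads):
            h += g.detach() ** 2                        # Fisher-style diagonal term
    return [1.0 / (prior_prec + h) for h in hess_diag]  # diagonal posterior variances

@torch.no_grad()
def predictive(model, post_var, x, n_post_samples=16):
    """Sample weights from the diagonal Gaussian posterior and propagate the
    samples through the network to get a predictive mean and variance."""
    params = [p for p in model.parameters() if p.requires_grad]
    means = [p.detach().clone() for p in params]
    outs = []
    for _ in range(n_post_samples):
        for p, m, v in zip(params, means, post_var):
            p.copy_(m + v.sqrt() * torch.randn_like(m))  # draw one weight sample
        outs.append(model(x))
    for p, m in zip(params, means):                      # restore point-estimate weights
        p.copy_(m)
    outs = torch.stack(outs)
    return outs.mean(0), outs.var(0)

if __name__ == "__main__":
    torch.manual_seed(0)
    data = torch.rand(256, 784)
    loader = torch.utils.data.DataLoader(data, batch_size=64)
    model = TinyAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(5):                                   # short point-estimate training loop
        for x in loader:
            opt.zero_grad()
            nn.functional.mse_loss(model(x), x).backward()
            opt.step()
    post_var = diagonal_laplace(model, loader)
    mean, var = predictive(model, post_var, data[:8])
    print(mean.shape, var.mean().item())
```

The per-output variance returned by `predictive` is what "uncertainty in output space" refers to above; a corresponding variance over `model.enc(x)` would give latent-space uncertainty under the same assumed setup.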
Original language | English
---|---
Title of host publication | Proceedings of 36th Conference on Neural Information Processing Systems
Number of pages | 27
Publication date | 2022
Publication status | Published - 2022
Event | 2022 Conference on Neural Information Processing Systems, New Orleans Ernest N. Morial Convention Center, New Orleans, United States. Duration: 28 Nov 2022 → 9 Dec 2022
Conference
Conference | 2022 Conference on Neural Information Processing Systems
---|---
Location | New Orleans Ernest N. Morial Convention Center
Country/Territory | United States
City | New Orleans
Period | 28/11/2022 → 09/12/2022