Revisiting Active Sets for Gaussian Process Decoders

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

Decoders built on Gaussian processes (GPs) are enticing due to the marginalisation over the non-linear function space. Such models (also known as GP-LVMs) are often expensive and notoriously difficult to train in practice, but can be scaled using variational inference and inducing points. In this paper, we revisit active set approximations. We develop a new stochastic estimate of the log-marginal likelihood based on recently discovered links to cross-validation, and propose a computationally efficient approximation thereof. We demonstrate that the resulting stochastic active sets (SAS) approximation significantly improves the robustness of GP decoder training while reducing computational cost. The SAS-GP obtains more structure in the latent space, scales to many datapoints and learns better representations than variational autoencoders, which is rarely the case for GP decoders.
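To make the active-set idea concrete, here is a minimal sketch of scoring a GP by evaluating its exact log-marginal likelihood on a random subset (an "active set") of the data. This is an illustration of the general principle only, not the paper's SAS estimator or its cross-validation-based correction; the RBF kernel, its hyperparameters, and the function names are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel; hyperparameter values are illustrative defaults.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_log_marginal(X, y, noise=1e-2):
    # Exact GP log-marginal likelihood log N(y | 0, K + noise * I),
    # computed via a Cholesky factorisation: O(n^3) in the number of points.
    n = X.shape[0]
    K = rbf_kernel(X, X) + noise * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * n * np.log(2 * np.pi))

def active_set_estimate(X, y, m=64, rng=None):
    # Stochastic active set: draw m points uniformly without replacement and
    # evaluate the exact likelihood on that subset, so the cubic cost is in m, not n.
    rng = np.random.default_rng(rng)
    idx = rng.choice(X.shape[0], size=min(m, X.shape[0]), replace=False)
    return gp_log_marginal(X[idx], y[idx])
```

Because each minibatch of the estimate only requires a Cholesky factorisation of an m × m matrix, the cost per evaluation drops from O(n³) to O(m³), which is the computational motivation for active-set training of GP decoders.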
Original language: English
Title of host publication: Proceedings of the 36th Conference on Neural Information Processing Systems
Number of pages: 16
Publication date: 2022
Publication status: Published - 2022
Event: 2022 Conference on Neural Information Processing Systems, New Orleans Ernest N. Morial Convention Center, New Orleans, United States
Duration: 28 Nov 2022 – 9 Dec 2022


