Adaptive Cholesky Gaussian Processes

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-reviewed



We present a method for approximating Gaussian process regression models on large datasets by considering only a subset of the data. Our approach is novel in that the size of the subset is selected on the fly during exact inference with little computational overhead. From an empirical observation that the log-marginal likelihood often exhibits a linear trend once a sufficient subset of a dataset has been observed, we conclude that many large datasets contain redundant information that only slightly affects the posterior. Based on this, we provide probabilistic bounds on the full model evidence that can identify such subsets. Remarkably, these bounds are largely composed of terms that appear in intermediate steps of the standard Cholesky decomposition, allowing us to modify the algorithm to adaptively stop the decomposition once enough data have been observed.
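The key observation in the abstract can be sketched in code: with the Cholesky factorization K + σ²I = LLᵀ, the GP log-marginal likelihood decomposes as a sum of per-datum terms, -log Lᵢᵢ - ½zᵢ² - ½log 2π with z = L⁻¹y, each available during the row-by-row factorization. The sketch below illustrates this decomposition with a naive early-stopping heuristic (stop once increments stabilize); the paper's actual probabilistic bounds, kernel, and hyperparameters are not reproduced here, so all names and the stopping rule are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(x, y, lengthscale=1.0):
    # Squared-exponential kernel on 1-D inputs; a stand-in choice,
    # not prescribed by the paper.
    d2 = (x[:, None] - y[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def adaptive_cholesky_loglik(x, y, noise=1e-2, tol=0.1, patience=5):
    """Row-by-row Cholesky of K + noise*I, accumulating per-datum
    log-marginal-likelihood increments. The stopping rule (increments
    that stop changing) is a crude stand-in for the paper's bounds."""
    n = len(y)
    K = rbf_kernel(x, x) + noise * np.eye(n)
    L = np.zeros((n, n))
    z = np.zeros(n)          # z = L^{-1} y, built alongside L
    log_lik, prev_inc, stable = 0.0, None, 0
    for i in range(n):
        # Standard Cholesky row update.
        for j in range(i):
            L[i, j] = (K[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
        L[i, i] = np.sqrt(K[i, i] - L[i, :i] @ L[i, :i])
        z[i] = (y[i] - L[i, :i] @ z[:i]) / L[i, i]
        # Contribution of datum i to log p(y_{1:i}); these terms fall
        # out of the Cholesky factors, the observation the paper exploits.
        inc = -np.log(L[i, i]) - 0.5 * z[i] ** 2 - 0.5 * np.log(2 * np.pi)
        log_lik += inc
        # Heuristic stop: increments near-constant => near-linear trend
        # in the log-marginal likelihood, i.e. remaining data is redundant.
        if prev_inc is not None and abs(inc - prev_inc) < tol:
            stable += 1
            if stable >= patience:
                return log_lik, i + 1   # stopped after i+1 points
        else:
            stable = 0
        prev_inc = inc
    return log_lik, n
```

Because the leading m×m block of L is exactly the Cholesky factor of the leading m×m block of K, the accumulated value at any stopping point equals the exact log-marginal likelihood of the subset seen so far.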
Original language: English
Title of host publication: Proceedings of the 26th International Conference on Artificial Intelligence and Statistics
Publisher: Proceedings of Machine Learning Research
Publication date: 2023
Publication status: Published - 2023
Event: 26th International Conference on Artificial Intelligence and Statistics - Valencia, Spain
Duration: 25 Apr 2023 - 27 Apr 2023
Conference number: 26


Conference: 26th International Conference on Artificial Intelligence and Statistics
Series: Proceedings of Machine Learning Research


