Bayesian Leave-One-Out Cross-Validation for Large Data

Michael Riis Andersen, Mans Magnusson, Johan Jonasson, Aki Vehtari

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

Model inference, such as model comparison, model checking, and model selection, is an important part of model development. Leave-one-out cross-validation (LOO-CV) is a general approach for assessing the generalizability of a model, but unfortunately, LOO-CV does not scale well to large datasets. We propose a combination of approximate inference techniques and probability-proportional-to-size sampling (PPS) for fast LOO-CV model evaluation for large data. We provide both theoretical and empirical results showing good properties for large data.
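The core idea of PPS subsampling for LOO-CV can be illustrated with a small sketch. Below, a cheap per-observation approximation of the LOO log predictive density (e.g. from a posterior approximation) is used as the size measure, a subsample is drawn with probability proportional to its magnitude, and the total elpd_loo is estimated with a Hansen–Hurwitz estimator. All variable names and the simulated data are illustrative assumptions; the paper's actual estimator and approximations are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # number of observations (hypothetical)

# Hypothetical cheap approximations of each observation's LOO log
# predictive density, e.g. from a Laplace or variational posterior.
elpd_approx = -np.abs(rng.normal(1.0, 0.5, size=n))

# "Exact" per-observation values, which in practice would only be
# computed for the sampled observations (simulated here as the
# approximation plus small noise).
elpd_exact = elpd_approx + rng.normal(0.0, 0.05, size=n)

# PPS: draw m indices with probability proportional to |elpd_approx|.
m = 200
p = np.abs(elpd_approx) / np.abs(elpd_approx).sum()
idx = rng.choice(n, size=m, replace=True, p=p)

# Hansen-Hurwitz estimator of the total elpd_loo over all n points,
# using only the m sampled (expensive) evaluations.
elpd_hat = np.mean(elpd_exact[idx] / p[idx])

print(f"estimate: {elpd_hat:.1f}  full sum: {elpd_exact.sum():.1f}")
```

Because the sampling probabilities are nearly proportional to the values being summed, the estimator's variance stays small even though only m of the n expensive LOO terms are evaluated.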
Original language: English
Title of host publication: Proceedings of the 36th International Conference on Machine Learning
Publisher: International Machine Learning Society (IMLS)
Publication date: 2019
Pages: 7505-7525
ISBN (Print): 9781510886988
Publication status: Published - 2019
Event: 36th International Conference on Machine Learning - Long Beach Convention Center, Long Beach, United States
Duration: 10 Jun 2019 - 15 Jun 2019
Conference number: 36

Conference

Conference: 36th International Conference on Machine Learning
Number: 36
Location: Long Beach Convention Center
Country/Territory: United States
City: Long Beach
Period: 10/06/2019 - 15/06/2019
