Abstract
Model inference, such as model comparison, model checking, and model selection, is an important part of model development. Leave-one-out cross-validation (LOO-CV) is a general approach for assessing the generalizability of a model, but unfortunately, LOO-CV does not scale well to large datasets. We propose a combination of approximate inference techniques and probability-proportional-to-size sampling (PPS) for fast LOO-CV model evaluation for large data. We provide both theoretical and empirical results showing good properties for large data.
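The core idea in the abstract can be sketched as follows: instead of computing every pointwise LOO term, draw a small PPS subsample (points sampled with probability proportional to a cheap "size" proxy, e.g. an approximate pointwise predictive density) and estimate the total with a Hansen-Hurwitz-type weighted mean. This is a minimal illustrative sketch with simulated numbers, not the paper's implementation; the variable names and the noisy proxy are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the n pointwise LOO log predictive densities.
# In the paper's setting these would be expensive to compute exactly,
# so only the subsampled ones would actually be evaluated.
n = 100_000
loo_i = rng.normal(loc=-1.0, scale=0.5, size=n)

# Cheap "size" proxy for each point, e.g. from a posterior approximation
# fit once to the full data. Here: a noisy version of |loo_i|.
size = np.abs(loo_i + rng.normal(scale=0.1, size=n))
p = size / size.sum()  # PPS sampling probabilities

# PPS sampling with replacement + Hansen-Hurwitz estimator of the
# total expected log predictive density (elpd_loo).
m = 1_000
idx = rng.choice(n, size=m, replace=True, p=p)
elpd_hat = np.mean(loo_i[idx] / p[idx])

print(f"exact total elpd_loo:   {loo_i.sum():.1f}")
print(f"PPS estimate (m={m}): {elpd_hat:.1f}")
```

Because the sampling probabilities track the magnitude of each point's contribution, the weighted ratios `loo_i / p_i` have low variance and a subsample of 1,000 points estimates the sum over 100,000 points closely.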
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 36th International Conference on Machine Learning |
| Publisher | International Machine Learning Society (IMLS) |
| Publication date | 2019 |
| Pages | 7505-7525 |
| ISBN (Print) | 9781510886988 |
| Publication status | Published - 2019 |
| Event | 36th International Conference on Machine Learning, Long Beach Convention Center, Long Beach, United States. Duration: 10 Jun 2019 → 15 Jun 2019. Conference number: 36 |
Conference
| Conference | 36th International Conference on Machine Learning |
|---|---|
| Number | 36 |
| Location | Long Beach Convention Center |
| Country/Territory | United States |
| City | Long Beach |
| Period | 10/06/2019 → 15/06/2019 |
Fingerprint
Dive into the research topics of 'Bayesian Leave-One-Out Cross-Validation for Large Data'.