Abstract
We propose a new approximation method for Gaussian process (GP) learning on large data sets that combines inline active set selection with hyperparameter optimization. Data points are ranked by the predictive probability of their labels. We use the leave-one-out (LOO) predictive probability available in GPs to produce a common ranking for both active and inactive points, so that points can also be removed from the active set again. This is important for keeping the computational complexity down while focusing on points close to the decision boundary. We give both theoretical and empirical support for the active set selection strategy and for marginal likelihood optimization on the active set. Extensive tests on the USPS and MNIST digit classification databases, with and without incorporating invariances, demonstrate that we obtain state-of-the-art results (e.g. 0.86% error on MNIST) with reasonable time complexity.
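The record contains no code; purely as an informal illustration of the ranking idea described in the abstract, the sketch below computes the closed-form leave-one-out predictive probabilities of a GP regression model (the paper itself treats classification, so this is only an analogue) and orders the points so that those with the lowest LOO predictive probability, i.e. the candidates closest to the decision boundary, come first. The function name `loo_rank`, the `noise` argument, and the use of a direct matrix inverse are illustrative assumptions, not part of the published method.

```python
import numpy as np

def loo_rank(K, y, noise=1e-2):
    """Rank points by leave-one-out (LOO) predictive probability under
    GP regression (illustrative analogue; the paper addresses classification)."""
    n = len(y)
    Kinv = np.linalg.inv(K + noise * np.eye(n))  # O(n^3); acceptable for a sketch
    alpha = Kinv @ y
    # Closed-form LOO predictive mean and variance for GP regression
    var_loo = 1.0 / np.diag(Kinv)
    mu_loo = y - alpha * var_loo
    # LOO log predictive probability of each observed target
    logp = -0.5 * np.log(2 * np.pi * var_loo) - 0.5 * (y - mu_loo) ** 2 / var_loo
    # Lowest LOO predictive probability first: these are the natural
    # candidates for inclusion in the active set.
    return np.argsort(logp)
```

Because the same LOO score is defined for every training point, active and inactive points can be ranked on a common scale, which is what allows points to drop out of the active set again.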
Original language | English |
---|---|
Title of host publication | IEEE International Workshop on Machine Learning for Signal Processing 2010 (MLSP 2010) |
Publisher | IEEE |
Publication date | 2010 |
Pages | 148-153 |
ISBN (Print) | 978-1-4244-7875-0 |
DOIs | |
Publication status | Published - 2010 |
Event | 2010 IEEE International Workshop on Machine Learning for Signal Processing (no. 20), Kittilä, Finland, 29 Aug 2010 → 1 Sept 2010, https://ieeexplore.ieee.org/xpl/conhome/5576742/proceeding |