TY - GEN
T1 - Incremental Gaussian Processes
AU - Quiñonero-Candela, Joaquin
AU - Winther, Ole
PY - 2002
Y1 - 2002
AB - In this paper, we consider Tipping's relevance vector machine (RVM) and formalize an incremental training strategy as a variant of the expectation-maximization (EM) algorithm that we call subspace EM. By working with a subset of active basis functions, the sparsity of the RVM solution ensures that the number of basis functions, and thereby the computational complexity, is kept low. We also introduce a mean field approach to the intractable classification model that is expected to give a very good approximation to exact Bayesian inference and contains the Laplace approximation as a special case. We test the algorithms on two large data sets with 10^3-10^4 examples. The results indicate that Bayesian learning of large data sets, e.g. the MNIST database, is realistic.
KW - Incremental Methods
KW - Gaussian Processes
KW - Mean Field Classification
KW - Computational Complexity
KW - Bayesian Kernel Methods
M3 - Article in proceedings
BT - Advances in Neural Information Processing Systems
ER -