Abstract
The leave-one-out cross-validation scheme for generalization
assessment of neural network models is computationally expensive
due to replicated training sessions. Linear unlearning of examples
has recently been suggested as an approach to approximate
cross-validation. Here we briefly review the linear unlearning
scheme, dubbed LULOO, and we illustrate it on a
system identification example. Further, we address the possibility
of extracting confidence information (error bars) from the LULOO
ensemble.
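To make the computational cost concrete: exact leave-one-out cross-validation requires one full retraining per example, which is what LULOO's linear unlearning avoids. The sketch below illustrates the exact (brute-force) scheme on a toy least-squares model standing in for network training; the data, model, and function names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical toy data; least-squares fitting stands in for
# the replicated neural network training sessions.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=20)

def fit(X, y):
    # One "training session": ordinary least squares.
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Exact leave-one-out: retrain once per left-out example.
loo_errors = []
for i in range(len(y)):
    mask = np.arange(len(y)) != i      # leave example i out
    w = fit(X[mask], y[mask])          # retrain without it
    loo_errors.append((X[i] @ w - y[i]) ** 2)

loo_estimate = np.mean(loo_errors)     # generalization error estimate
print(loo_estimate)
```

The N retrainings in the loop are exactly the expense that a linearized unlearning step approximates in a single pass.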
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of International Conference on Neural Information Processing |
| Publication date | 1996 |
| Publication status | Published - 1996 |
| Event | International Conference on Neural Information Processing, Hong Kong. Duration: 1 Jan 1996 → … |
Conference

| Conference | International Conference on Neural Information Processing |
| --- | --- |
| City | Hong Kong |
| Period | 01/01/1996 → … |