Cross validation in LULOO

Paul Haase Sørensen, Peter Magnus Nørgård, Lars Kai Hansen, Jan Larsen

    Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

    Abstract

    The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. Linear unlearning of examples has recently been suggested as an approach to approximate cross-validation. Here we briefly review the linear unlearning scheme, dubbed LULOO, and we illustrate it on a system identification example. Further, we address the possibility of extracting confidence information (error bars) from the LULOO ensemble.
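    The core idea motivating LULOO can be illustrated with a sketch (this is not the paper's algorithm, merely an analogy): for a linear least-squares model, the leave-one-out residuals are available in closed form from the hat matrix, e_i^LOO = e_i / (1 - H_ii), so no retraining is needed. Linear unlearning applies the same principle to a trained neural network by linearizing around the trained weights.

    ```python
    import numpy as np

    # Hypothetical toy data for illustration only.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

    # Hat (influence) matrix of ordinary least squares: H = X (X^T X)^{-1} X^T.
    H = X @ np.linalg.solve(X.T @ X, X.T)
    resid = y - H @ y                       # ordinary residuals
    loo_fast = resid / (1.0 - np.diag(H))   # closed-form LOO residuals, no retraining

    # Brute-force check: actually refit with each example held out.
    loo_slow = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        w, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        loo_slow[i] = y[i] - X[i] @ w

    assert np.allclose(loo_fast, loo_slow)
    ```

    The brute-force loop mirrors the replicated training sessions the abstract calls expensive; the closed-form line mirrors what an unlearning-style approximation buys.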
    Original language: English
    Title of host publication: Proceedings of International Conference on Neural Information Processing
    Publication date: 1996
    Publication status: Published - 1996
    Event: International Conference on Neural Information Processing - Hong Kong
    Duration: 1 Jan 1996 → …

    Conference

    Conference: International Conference on Neural Information Processing
    City: Hong Kong
    Period: 01/01/1996 → …
