Cross validation in LULOO

Paul Haase Sørensen, Peter Magnus Nørgård, Lars Kai Hansen, Jan Larsen

    Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

    Abstract

    The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. Linear unlearning of examples has recently been suggested as an approach to approximate cross-validation. Here we briefly review the linear unlearning scheme, dubbed LULOO, and illustrate it on a system-identification example. Further, we address the possibility of extracting confidence information (error bars) from the LULOO ensemble.
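    To see why a linearized shortcut can replace replicated training sessions, consider the special case of ordinary least squares, where the leave-one-out residuals have an exact closed form via the leverages (the diagonal of the hat matrix). This is a minimal illustrative sketch of the idea of approximating leave-one-out without refitting, not the authors' LULOO scheme for neural networks; all variable names and the synthetic data are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 50, 3
    X = rng.normal(size=(n, p))                     # synthetic regressors (illustrative)
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)

    # Full least-squares fit and its residuals.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta

    # Leverages: diagonal of the hat matrix H = X (X^T X)^{-1} X^T.
    h = np.diag(X @ np.linalg.solve(X.T @ X, X.T))

    # Closed-form leave-one-out residuals: e_i / (1 - h_ii), no refitting needed.
    loo_fast = resid / (1.0 - h)

    # Brute-force leave-one-out for comparison: n separate refits.
    loo_slow = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        b_i, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        loo_slow[i] = y[i] - X[i] @ b_i

    print(np.allclose(loo_fast, loo_slow))  # True: shortcut matches n refits exactly
    ```

    For nonlinear models such as neural networks this identity no longer holds exactly, but linearizing the trained model around its weight estimate yields an analogous one-shot approximation, which is the spirit of the LULOO approach.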
    Original language: English
    Title of host publication: Proceedings of International Conference on Neural Information Processing
    Publication date: 1996
    Publication status: Published - 1996
    Event: International Conference on Neural Information Processing - Hong Kong, Hong Kong
    Duration: 24 Sept 1996 - 27 Sept 1996

