Generalization performance of regularized neural network models

Jan Larsen, Lars Kai Hansen

    Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


    Abstract

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization normally improves the generalization performance by restricting the model complexity. A formula for the optimal weight decay regularizer is derived. A regularized model may be characterized by an effective number of weights (parameters); however, it is demonstrated that no simple definition is possible. A novel estimator of the average generalization error (called FPER) is suggested and compared to the final prediction error (FPE) and generalized prediction error (GPE) estimators. In addition, comparative numerical studies demonstrate the qualities of the suggested estimator.
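    For context, the baseline estimators named in the abstract have standard forms in the literature; the following is background from Akaike's and Moody's work, not a quotation of the paper's own FPER derivation. With N training examples, p weights, mean squared training error \hat{\epsilon}^2, noise variance estimate \hat{\sigma}^2, and Hessian eigenvalues \lambda_i under weight decay strength \kappa, the usual definitions read:

        \[ % standard Akaike FPE and Moody GPE forms (background, not the paper's FPER)
          \mathrm{FPE} = \frac{N+p}{N-p}\,\hat{\epsilon}^2,
          \qquad
          \mathrm{GPE} = \hat{\epsilon}^2 + \frac{2\hat{\sigma}^2 p_{\mathrm{eff}}}{N},
          \qquad
          p_{\mathrm{eff}} = \sum_i \frac{\lambda_i}{\lambda_i + \kappa}.
        \]

    The GPE form makes the abstract's point concrete: p_eff interpolates between 0 (heavy regularization) and p (no regularization), but this is only one candidate definition of the effective number of weights, and the paper demonstrates that no simple definition fully characterizes a regularized model.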
    Original language: English
    Title of host publication: Proceedings of the 4th IEEE Workshop on Neural Networks for Signal Processing
    Publisher: IEEE
    Publication date: 1994
    Pages: 42-51
    ISBN (Print): 0-7803-2026-3
    Publication status: Published - 1994
    Event: IEEE Workshop on Neural Networks for Signal Processing IV - Ermioni, Greece
    Duration: 6 Sep 1994 - 8 Sep 1994
    Conference number: 4th
    http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=2959

    Workshop

    Workshop: IEEE Workshop on Neural Networks for Signal Processing IV
    Number: 4th
    Country: Greece
    City: Ermioni
    Period: 06/09/1994 - 08/09/1994
    Internet address: http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=2959

    Bibliographical note

    Copyright: 1994 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

