### Abstract

Original language | English
---|---
Title of host publication | Proceedings of the 2000 IEEE Signal Processing Society Workshop
Volume | 1
Place of Publication | Sydney, NSW
Publisher | IEEE
Publication date | 2000
Pages | 221-230
ISBN (Print) | 0-7803-6278-0
DOIs | 10.1109/NNSP.2000.889413
Publication status | Published - 2000
Event | Neural Networks for Signal Processing X - Sydney, NSW. Duration: 1 Jan 2000 → … Conference number: 10

### Conference

Conference | Neural Networks for Signal Processing X
---|---
Number | 10
City | Sydney, NSW
Period | 01/01/2000 → …

### Bibliographical note

Copyright: 2000 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

### Cite this

*Proceedings of the 2000 IEEE Signal Processing Society Workshop* (Vol. 1, pp. 221-230). Sydney, NSW: IEEE. https://doi.org/10.1109/NNSP.2000.889413


*Proceedings of the 2000 IEEE Signal Processing Society Workshop.* vol. 1, IEEE, Sydney, NSW, pp. 221-230, Neural Networks for Signal Processing X, Sydney, NSW, 01/01/2000. https://doi.org/10.1109/NNSP.2000.889413

**On Comparison of Adaptive Regularization Methods.** / Sigurdsson, Sigurdur; Larsen, Jan; Hansen, Lars Kai.

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

TY - GEN

T1 - On Comparison of Adaptive Regularization Methods

AU - Sigurdsson, Sigurdur

AU - Larsen, Jan

AU - Hansen, Lars Kai

N1 - Copyright: 2000 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE

PY - 2000

Y1 - 2000

N2 - Modeling with flexible models, such as neural networks, requires careful control of the model complexity and generalization ability of the resulting model, which finds expression in the ubiquitous bias-variance dilemma. Regularization is a tool for optimizing the model structure, reducing variance at the expense of introducing extra bias. The overall objective of adaptive regularization is to tune the amount of regularization so as to ensure minimal generalization error. Regularization is a supplement to direct model selection techniques such as step-wise selection, and one would prefer a hybrid scheme; however, a very flexible regularization may obviate the need for selection procedures. This paper investigates recently suggested adaptive regularization schemes. Some methods focus directly on minimizing an estimate of the generalization error (either algebraic or empirical), whereas others start from different criteria, e.g., the Bayesian evidence. The evidence basically expresses the probability of the model, which is conceptually different from the generalization error; however, they converge asymptotically for large training data sets. First, the basic model definition, training, and generalization are presented. Next, different adaptive regularization schemes are reviewed and extended. Finally, the experimental section presents a comparative study concerning linear models for regression/time series problems.

AB - Modeling with flexible models, such as neural networks, requires careful control of the model complexity and generalization ability of the resulting model, which finds expression in the ubiquitous bias-variance dilemma. Regularization is a tool for optimizing the model structure, reducing variance at the expense of introducing extra bias. The overall objective of adaptive regularization is to tune the amount of regularization so as to ensure minimal generalization error. Regularization is a supplement to direct model selection techniques such as step-wise selection, and one would prefer a hybrid scheme; however, a very flexible regularization may obviate the need for selection procedures. This paper investigates recently suggested adaptive regularization schemes. Some methods focus directly on minimizing an estimate of the generalization error (either algebraic or empirical), whereas others start from different criteria, e.g., the Bayesian evidence. The evidence basically expresses the probability of the model, which is conceptually different from the generalization error; however, they converge asymptotically for large training data sets. First, the basic model definition, training, and generalization are presented. Next, different adaptive regularization schemes are reviewed and extended. Finally, the experimental section presents a comparative study concerning linear models for regression/time series problems.

U2 - 10.1109/NNSP.2000.889413

DO - 10.1109/NNSP.2000.889413

M3 - Article in proceedings

SN - 0-7803-6278-0

VL - 1

SP - 221

EP - 230

BT - Proceedings of the 2000 IEEE Signal Processing Society Workshop

PB - IEEE

CY - Sydney, NSW

ER -
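
The abstract describes adaptive regularization as tuning the amount of regularization to minimize an estimate of the generalization error, either algebraic or empirical, for linear models on regression problems. A minimal sketch of that idea, assuming a ridge-regularized linear model and a leave-one-out estimator of the generalization error (the function names `loo_error` and `adapt_lambda`, the penalty grid, and the synthetic data are illustrative choices, not the paper's specific schemes):

```python
# Illustrative sketch: pick the ridge penalty lambda for a linear model
# y ~ X w by minimizing a leave-one-out (LOO) estimate of the
# generalization error. For ridge regression the LOO residuals have a
# closed form via the hat matrix, so no refitting loop is needed.
import numpy as np

def loo_error(X, y, lam):
    """Closed-form leave-one-out MSE for ridge regression with penalty lam."""
    n, d = X.shape
    # Hat matrix H = X (X'X + lam I)^{-1} X'
    A = X.T @ X + lam * np.eye(d)
    H = X @ np.linalg.solve(A, X.T)
    resid = y - H @ y
    # LOO residual for sample i is resid_i / (1 - H_ii)
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

def adapt_lambda(X, y, grid):
    """Return the penalty from `grid` with minimal estimated generalization error."""
    errs = [loo_error(X, y, lam) for lam in grid]
    return grid[int(np.argmin(errs))]

# Synthetic regression data: 10 inputs, only 3 of which are relevant.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
w_true = np.zeros(10)
w_true[:3] = 1.0
y = X @ w_true + 0.5 * rng.normal(size=50)

grid = np.logspace(-3, 2, 20)
best = adapt_lambda(X, y, grid)
```

Grid search is the simplest stand-in for the schemes the paper compares; gradient-based updates of the regularization parameters against the same error estimate follow the same pattern.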