Learning regularization parameters of inverse problems via deep neural networks

Babak Maboudi Afkham, Julianne Chung, Matthias Chung*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

In this work, we describe a new approach that uses deep neural networks (DNN) to obtain regularization parameters for solving inverse problems. We consider a supervised learning approach, where a network is trained to approximate the mapping from observation data to regularization parameters. Once the network is trained, regularization parameters for newly obtained data are computed by efficient forward propagation of the DNN. We show that a wide variety of regularization functionals, forward models, and noise models may be considered. The network-obtained regularization parameters can be computed more efficiently and may even lead to more accurate solutions compared to existing regularization parameter selection methods. We emphasize that the key advantage of using DNNs for learning regularization parameters, compared to previous works on learning via bilevel optimization or empirical Bayes risk minimization, is greater generalizability. That is, rather than computing one set of parameters that is optimal with respect to one particular design objective, DNN-computed regularization parameters are tailored to the specific features or properties of the newly observed data. Thus, our approach may better handle cases where the observation is not a close representation of the training set. Furthermore, we avoid the need for expensive and challenging bilevel optimization methods as utilized in other existing training approaches. Numerical results demonstrate that trained DNNs can predict regularization parameters faster and better than existing methods, hence resulting in more accurate solutions to inverse problems.
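
The abstract describes a workflow in which a network is trained offline to map observation data to regularization parameters and is then applied to new data through a single forward pass. The sketch below is a rough, non-authoritative illustration of that idea for Tikhonov regularization; it is not the authors' implementation. The 1D Gaussian-blur forward model, the grid-searched training targets, the network architecture, and the use of NumPy/PyTorch are all assumptions made here for illustration.

    # Illustrative sketch (not the paper's code): learn a map from observed data b
    # to a Tikhonov regularization parameter with a small MLP.
    import numpy as np
    import torch
    import torch.nn as nn

    rng = np.random.default_rng(0)
    n = 64  # signal length

    # Assumed forward operator: 1D Gaussian blur as a dense matrix
    idx = np.arange(n)
    A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
    A /= A.sum(axis=1, keepdims=True)

    def tikhonov(b, lam):
        """Solve min_x ||A x - b||^2 + lam ||x||^2."""
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    def sample(noise_level):
        """Draw a smooth ground truth, blur it, and add Gaussian noise."""
        x = np.cumsum(rng.standard_normal(n)); x -= x.mean()
        b = A @ x + noise_level * rng.standard_normal(n)
        return x, b

    # Training pairs (b, lambda*): lambda* minimizes the reconstruction error on a
    # log-spaced grid, which requires the known ground truth x (supervised setup).
    lams = np.logspace(-6, 1, 50)
    B, targets = [], []
    for _ in range(500):
        x, b = sample(noise_level=rng.uniform(0.01, 0.1))
        errs = [np.linalg.norm(tikhonov(b, lam) - x) for lam in lams]
        B.append(b); targets.append(np.log10(lams[int(np.argmin(errs))]))
    B = torch.tensor(np.array(B), dtype=torch.float32)
    targets = torch.tensor(np.array(targets), dtype=torch.float32).unsqueeze(1)

    # Small MLP predicting log10(lambda) from the observation b
    net = nn.Sequential(nn.Linear(n, 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(),
                        nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for epoch in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(B), targets)
        loss.backward(); opt.step()

    # Inference: one forward pass gives lambda for new data, then one Tikhonov solve
    x_new, b_new = sample(noise_level=0.05)
    with torch.no_grad():
        lam_hat = 10.0 ** net(torch.tensor(b_new, dtype=torch.float32)).item()
    x_rec = tikhonov(b_new, lam_hat)
    print(f"predicted lambda = {lam_hat:.3e}, rel. error = "
          f"{np.linalg.norm(x_rec - x_new) / np.linalg.norm(x_new):.3f}")

Predicting log10(lambda) keeps the parameter positive and lets the network cover several orders of magnitude. At test time, obtaining the parameter costs one network evaluation plus one regularized solve, which mirrors the efficiency argument made in the abstract.
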
Original language: English
Article number: 105017
Journal: Inverse Problems
Volume: 37
Issue number: 10
Number of pages: 30
ISSN: 0266-5611
DOIs
Publication status: Published - 2021

Keywords

  • Bilevel optimization
  • Deep learning
  • Deep neural networks
  • Hyperparameter selection
  • Optimal experimental design
  • Regularization
