Abstract
In many inverse problems, a nonnegativity constraint is natural. Moreover, in some cases, we expect the vector of unknown parameters to have zero components. When a Bayesian approach is taken, this motivates a desire for prior probability density (and hence posterior probability density) functions that have positive mass at the boundary of the set {x ∈ ℝᴺ : x ≥ 0}. Unfortunately, it is difficult to define a prior with this property that yields computationally tractable inference for large-scale inverse problems. In this paper, we use nonnegativity constrained optimization to define such prior and posterior density functions when the measurement error is either Gaussian or Poisson distributed. The numerical optimization methods we use are highly efficient, and hence our approach is computationally tractable even in large-scale cases. We embed our nonnegativity constrained optimization approach within a hierarchical framework, obtaining Gibbs samplers for both Gaussian and Poisson distributed measurement cases. Finally, we test the resulting Markov chain Monte Carlo methods on examples from both image deblurring and positron emission tomography.
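As a rough illustration of the nonnegativity-constrained optimization the abstract refers to, the sketch below solves a generic nonnegative least-squares problem (the Gaussian-likelihood case) by projected gradient descent. The matrix `A`, the vector `x_true`, and the solver itself are illustrative assumptions for this summary, not the paper's algorithm.

```python
import numpy as np

def nonneg_ls(A, b, steps=500):
    """Projected gradient descent for min ||Ax - b||^2 subject to x >= 0.

    Illustrative sketch only; not the method of the paper.
    """
    # Step size 1/L, where L = 2 * sigma_max(A)^2 is the Lipschitz
    # constant of the gradient of ||Ax - b||^2.
    L = 2.0 * np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = 2.0 * A.T @ (A @ x - b)
        x = np.maximum(x - grad / L, 0.0)  # project onto {x >= 0}
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5])  # some exactly-zero components
b = A @ x_true
x_hat = nonneg_ls(A, b)
print(np.round(x_hat, 3))
```

Note that the constrained minimizer can place components exactly at zero; this is the behavior that makes a posterior with positive mass on the boundary of the nonnegative orthant possible in the hierarchical framework described above.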
| Original language | English |
|---|---|
| Journal | SIAM Journal on Scientific Computing |
| Volume | 42 |
| Issue number | 2 |
| Pages (from-to) | A1269–A1288 |
| ISSN | 1064-8275 |
| DOIs | |
| Publication status | Published - 1 Jan 2020 |
Keywords
- Bayesian methods
- Inverse problems
- Markov chain Monte Carlo
- Nonnegativity constraints
- Uncertainty quantification