Uncertainty Quantification for Inverse Problems with Sparsity-Promoting Implicit Priors

Research output: Book/Report › Ph.D. thesis


Abstract

Uncertainty quantification for inverse problems studies the estimation of parameters from corrupted, indirect observations and the uncertainty in these estimates. A common approach to uncertainty quantification is Bayes' formula, which provides a posterior probability distribution that combines knowledge of the observations with our prior knowledge of the parameters. In practice, parameters can often be approximated well on low-dimensional sets, resulting in sparse approximations. Such information can be incorporated into the posterior by using sparsity-promoting priors.
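For concreteness, Bayes' formula in this setting reads as follows (standard notation, not taken verbatim from the thesis): the posterior density of the parameters x given observations y is proportional to the likelihood times the prior.

```latex
% Posterior density via Bayes' formula (standard notation):
\[
  \pi_{\mathrm{post}}(x \mid y) \;\propto\; \pi_{\mathrm{like}}(y \mid x)\,\pi_{\mathrm{prior}}(x)
\]
```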

When the posterior is a Gaussian distribution, sampling can be done efficiently by solving randomized linear least squares problems. Instead of explicitly incorporating additional information in the prior, we can implicitly add information by modifying the randomized linear least squares problem. Inspired by the addition of nonnegativity constraints, we propose adding more general constraints and sparsity-promoting regularization. The resulting probability distribution, which we call the regularized Gaussian distribution, lies at the core of this thesis.
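As a minimal sketch of this randomize-then-optimize idea: for a linear model y = Ax + e with Gaussian noise and a Gaussian prior, exact posterior samples come from solving a perturbed stacked least squares problem; adding a nonnegativity constraint to each solve yields samples from a regularized Gaussian. The problem sizes, noise levels, and the use of scipy.optimize.nnls below are illustrative assumptions, not the thesis' implementation.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Illustrative linear inverse problem y = A x + noise (A and sizes are assumptions).
n, m = 20, 30
A = rng.standard_normal((m, n))
x_true = np.maximum(rng.standard_normal(n), 0.0)   # nonnegative ground truth
sigma, delta = 0.1, 1.0                            # noise std, prior std
y = A @ x_true + sigma * rng.standard_normal(m)

# Stacked system whose least squares solution is the Gaussian posterior mean:
#   minimize ||A x - y||^2 / sigma^2 + ||x||^2 / delta^2
A_stack = np.vstack([A / sigma, np.eye(n) / delta])
b_stack = np.concatenate([y / sigma, np.zeros(n)])

def sample_regularized_gaussian():
    """One sample via a randomized, nonnegativity-constrained least squares solve."""
    # Perturb the right-hand side with unit Gaussian noise (randomize-then-optimize).
    b_rand = b_stack + rng.standard_normal(m + n)
    # Adding the constraint x >= 0 turns the Gaussian into a "regularized Gaussian".
    x_sample, _ = nnls(A_stack, b_rand)
    return x_sample

samples = np.array([sample_regularized_gaussian() for _ in range(100)])
print("posterior mean estimate:", samples.mean(axis=0)[:5])
```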

This thesis starts with a short introduction to inverse problems, uncertainty quantification, and sparsity. We then provide the necessary background on sparsity-promoting regularization in variational inverse problems and sparsity-promoting priors in Bayesian inverse problems.

We discuss theoretical properties and applications of the regularized Gaussian distribution. Specifically, we focus on how sparsity is promoted when the constraints are polyhedral sets and the regularization is convex piecewise linear. We then apply these sparsity-promoting properties to Bayesian linear inverse problems. This results in hierarchical models with efficient sampling algorithms under the assumption that the epigraph of the regularization function is a polyhedral cone.
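As a concrete illustration of how convex piecewise linear regularization promotes sparsity in these randomized solves, the sketch below adds an l1 penalty to the perturbed least squares problem and solves it with a few iterations of ISTA, whose soft-thresholding step produces exact zeros. The problem sizes, penalty weight, and solver choice are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 30, 20
A = rng.standard_normal((m, n))
y = A @ np.where(rng.random(n) < 0.2, 1.0, 0.0) + 0.05 * rng.standard_normal(m)
lam = 5.0  # l1 weight (illustrative)

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 -- the source of exact zeros."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sample_l1_regularized(num_iters=500):
    """One randomized solve: minimize ||A x - (y + eps)||^2 / 2 + lam * ||x||_1."""
    b = y + rng.standard_normal(m)          # randomized right-hand side
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # ISTA step size 1/L
    x = np.zeros(n)
    for _ in range(num_iters):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

x_s = sample_l1_regularized()
print("nonzeros in one sample:", np.count_nonzero(x_s), "of", n)
```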

To make these hierarchical models more flexible and easier to apply, we go beyond this conic assumption by adding further implicit assumptions. We present a proof-of-concept implementation of these hierarchical models in CUQIpy, a Python software package for computational uncertainty quantification for inverse problems. This code provides easily accessible tools for using the regularized Gaussian distribution to solve Bayesian linear inverse problems.
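A hedged sketch of what using such a prior in CUQIpy might look like is given below. The module paths, class names, and keyword arguments are assumptions based on the public CUQIpy documentation and may differ from the thesis' proof-of-concept code.

```python
import numpy as np
# NOTE: module paths and signatures below are assumptions, not verified API.
from cuqi.testproblem import Deconvolution1D
from cuqi.distribution import Gaussian
from cuqi.implicitprior import RegularizedGaussian
from cuqi.problem import BayesianProblem

# Small deconvolution test problem (assumed CUQIpy helper).
A, y_data, info = Deconvolution1D(dim=50).get_components()

# Implicit prior: a regularized (here nonnegativity-constrained) Gaussian.
x = RegularizedGaussian(np.zeros(A.domain_dim), 0.1, constraint="nonnegativity")
y = Gaussian(A @ x, 0.01)  # Gaussian likelihood for the linear model

# Sample the posterior with CUQIpy's high-level interface.
BP = BayesianProblem(y, x).set_data(y=y_data)
samples = BP.sample_posterior(200)
samples.plot_ci(exact=info.exactSolution)
```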

We further study theoretical properties of linear regression with convex piecewise linear regularization. In particular, we study the geometry underlying the well-posedness of these optimization problems, that is, the existence, uniqueness, and continuity of the solution map with respect to the data. We discuss the combinatorial nature of well-posedness and show that it is computationally difficult to verify for these problems.
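As a toy illustration of how uniqueness can fail for such problems (an assumed example, not one from the thesis): for the lasso-type problem min (x1 + x2 - 1)^2 + lam * (|x1| + |x2|) with rank-deficient design A = [1, 1], every nonnegative pair summing to 1 - lam/2 is optimal. The snippet below verifies that two distinct minimizers attain the same objective value.

```python
import numpy as np

lam = 0.5  # illustrative regularization weight

def objective(x):
    """Lasso-type objective with rank-deficient design A = [1, 1]."""
    return (x[0] + x[1] - 1.0) ** 2 + lam * (abs(x[0]) + abs(x[1]))

# Two distinct candidates, both with x1 + x2 = 1 - lam/2 = 0.75.
x_a = np.array([0.75, 0.00])   # sparse minimizer
x_b = np.array([0.30, 0.45])   # dense minimizer with the same objective

print(objective(x_a), objective(x_b))  # identical values: uniqueness fails
```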
Original language: English
Publisher: Technical University of Denmark
Number of pages: 174
Publication status: Published - 2024
