Project Details
Layman's description
This project aims to develop new sampling methods for the numerical solution of inverse problems in the Bayesian framework. The sampling methods will rely on modern analysis, specifically the theory of pseudo-differential calculi, and will enable the inclusion of sophisticated a priori information on the desired type of samples in infinite-dimensional function spaces. This sampling conditioned on prior information, which in itself can be subject to uncertainty, will improve the sampling efficiency and allow uncertainty quantification of difficult inverse problems.
An 'inverse problem' is the challenge of determining the input/cause of a given or measured output/effect. Inverse problems arise in various fields including physics, engineering, biology, and finance, where researchers attempt to infer the underlying causes or processes that generate a particular set of measurements or data. Inverse problems are often challenging because they are typically ill-posed: for example, there may be multiple possible causes or inputs that all lead to the same output, and the observed data may be noisy or incomplete. The situation is further complicated by uncertainties related to the system or model that transforms the input into the output, as well as to the characteristics of the input. Many scientific experiments result in incomplete data and require approximation or guessing to fill in the missing information. The numerical solution of inverse problems is typically sensitive to errors and uncertainties in the measurement data. From a forward problem point of view, uncertainty quantification (UQ) studies the propagation of uncertainty from the input to the output of a system; from an inverse problems point of view, it relates the uncertainty in the output/measurement to the uncertainty in the input/cause and in the system itself.
Bayesian inference is a way of making statistical inferences in which the statistician assigns subjective probabilities to distributions that could generate the data. Uncertainty quantification uses such methods to study and characterize the sensitivity in terms of probability densities. Bayesian inference is a powerful framework for solving inverse problems, which involve inferring the unknown parameters or functions of a model given noisy or incomplete observations. In some cases, the observations may be given in terms of functionals of the unknown function or parameter, which can make the inference problem more challenging.
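To make the Bayesian viewpoint concrete, the following is a minimal illustrative sketch (not taken from the project) of a discretized linear inverse problem y = Ax + noise with a Gaussian prior on x. In this linear-Gaussian setting the posterior is itself Gaussian with a closed form, so both the estimate and its uncertainty can be computed exactly; the forward map and prior chosen here are hypothetical placeholders.

```python
import numpy as np

# Toy linear inverse problem: y = A x + noise, Gaussian prior on x.
# The posterior is Gaussian with known mean and covariance.
rng = np.random.default_rng(0)

n = 50                                 # unknown discretized on n points
A = np.tril(np.ones((n, n))) / n       # hypothetical smoothing forward map
x_true = np.sin(np.linspace(0, 2 * np.pi, n))
sigma = 0.01                           # noise standard deviation
y = A @ x_true + sigma * rng.standard_normal(n)

C_prior = np.eye(n)                    # hypothetical i.i.d. Gaussian prior
# Standard linear-Gaussian posterior formulas:
C_post = np.linalg.inv(A.T @ A / sigma**2 + np.linalg.inv(C_prior))
m_post = C_post @ (A.T @ y / sigma**2)

# Pointwise uncertainty comes straight from the posterior covariance.
std_post = np.sqrt(np.diag(C_post))
```

The data always reduce uncertainty here: the posterior standard deviation is strictly below the prior's, which is the quantitative content of "uncertainty quantification" in this simplest setting.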
In many applications of inverse problems, we face the problem of estimating continuous quantities from indirect measurements. In addition, we must quantify uncertainties in such an estimate. Brute-force discretization of these quantities often results in inaccurate numerical solutions for such inverse problems. On the other hand, discretization-invariant Bayesian approaches for inverse problems treat unknown continuous quantities as infinite-dimensional functions. Therefore, they provide significant computational benefits for large-scale inverse problems. A key component for infinite-dimensional Bayesian inverse problems is to identify an appropriate prior distribution for the unknown continuous functions. The Whittle–Matérn priors are a popular choice for such unknowns, as they provide control over regularity and correlation length. However, incorporating spatial inhomogeneity, anisotropy, and local irregularities into such priors is challenging.
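In the simplest stationary case (constant coefficients, periodic domain), a Whittle–Matérn-type sample can be drawn spectrally by solving an equation of the form (k²I − ∆)^s u = ψ with white noise ψ, since the operator is diagonal in Fourier space. The sketch below, with hypothetical parameter values, shows this baseline; the inhomogeneous, anisotropic generalizations discussed in this project are precisely what this simple spectral trick cannot handle.

```python
import numpy as np

# Baseline sketch: stationary Whittle-Matérn-type sample on a periodic
# 1D grid, obtained by applying the inverse symbol (k^2 + xi^2)^(-s)
# to white noise in Fourier space.  Constant k, s; periodic boundary.
rng = np.random.default_rng(1)

n, L = 256, 1.0                                # grid points, domain length
k, s = 10.0, 1.0                               # inverse correlation length, smoothness
xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # angular Fourier frequencies

psi_hat = np.fft.fft(rng.standard_normal(n))   # white noise in Fourier space
u_hat = psi_hat / (k**2 + xi**2) ** s          # apply the inverse symbol
u = np.fft.ifft(u_hat).real                    # back to physical space
```

Larger s gives smoother samples and larger k shorter correlation length, which is the "control over regularity and correlation length" mentioned above; making k and s spatially varying breaks the Fourier diagonalization and motivates the pseudo-differential approach.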
Crucial to the iterative process, and to the properties of the resulting proposed solutions of the inverse problem, is the chosen sampling method. Specifically, to be efficient and to produce relevant solution proposals, the sampling must reflect the conditions imposed a priori on the solution of the inverse problem, and based on the setup of the problem, the prior physical knowledge, intuitive expectations of solution properties, etc. Solving the inverse problem requires sophisticated mathematical techniques, such as optimization, regression, and machine learning, which can help identify the most likely input or cause given the available data. Applications of the inverse problem include image and signal processing, medical imaging, geophysics, and many other fields where data analysis is essential.
This Ph.D. focuses on the theoretical and computational aspects of sampling from function spaces, conditioned on a priori constraints, in terms of solving pseudo-differential equations of the type
(1) (k(x)I − ∆_g)^{s(x)} u = ψ in R^d.
Here g is a Riemannian metric, ψ is a Gaussian white noise term, while k and s are deterministic or stochastic functions in some specified function spaces. We solve the above equation by finding a left parametrix Q of (k(x)I − ∆_g)^{s(x)} and applying it to (1), allowing us to isolate u in terms of
(2) u = Qψ + Ku
for some smoothing operator K ∈ Op S^{−∞}(R^d × R^d) with mapping properties that allow explicit study. Thus, a major goal of this project is to study generalizations of a modern formalism for sampling conditioned on functionals and to develop analytical aspects of the probabilistic model used in Bayesian inference with Gaussian processes. Furthermore, we shall produce, validate, and use numerical implementations of samplers conditioned on functionals to strengthen and further inspire our analysis, as well as to achieve practical working samplers applicable in the solution of concrete inverse problems. Finally, for a selected set of challenging inverse problems, we shall custom-build and demonstrate the developed samplers.
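For orientation, one standard dimension-robust MCMC method for function-space Bayesian inverse problems, which samplers of the kind developed here would build on or generalize, is the preconditioned Crank–Nicolson (pCN) algorithm. The sketch below is a hedged toy illustration, not the project's sampler: the prior covariance, observation model, and parameter values are all hypothetical.

```python
import numpy as np

# Toy pCN sampler for a Gaussian prior N(0, C).  Proposal:
#   v = sqrt(1 - beta^2) * u + beta * w,  w ~ N(0, C),
# accepted with probability min(1, exp(Phi(u) - Phi(v))), where Phi is the
# negative log-likelihood.  The prior-preserving proposal is what makes
# pCN robust under mesh refinement.
rng = np.random.default_rng(2)

n = 100
C_sqrt = np.eye(n)                     # hypothetical prior covariance factor

def phi(u, y=1.5, sigma=0.2):
    """Negative log-likelihood of a single toy observation y = u[0] + noise."""
    return 0.5 * ((u[0] - y) / sigma) ** 2

beta, n_steps = 0.2, 5000
u = C_sqrt @ rng.standard_normal(n)    # start from a prior draw
samples = []
for _ in range(n_steps):
    w = C_sqrt @ rng.standard_normal(n)
    v = np.sqrt(1 - beta**2) * u + beta * w      # pCN proposal
    if rng.random() < np.exp(min(0.0, phi(u) - phi(v))):
        u = v
    samples.append(u[0])

post_mean = np.mean(samples[1000:])    # discard burn-in
```

Because the proposal leaves the prior invariant, the acceptance ratio involves only the likelihood, and the acceptance rate does not collapse as the discretization dimension n grows; this discretization-invariance is the property the project's function-space samplers must likewise preserve.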
This Ph.D. project will integrate into the wider context of the CUQI (Computational Uncertainty Quantification for Inverse Problems) project led by Prof. Per Christian Hansen, contributing to CUQI as a developer and test bed for advanced samplers tailored to inverse problems studied by the CUQI scientists. The project will thus also contribute to the UQ platform for modeling and computations developed by CUQI.
| Status | Active |
| --- | --- |
| Effective start/end date | 01/12/2022 → 30/11/2025 |