Flexible priors for rough feature reconstruction in Bayesian inversion
Senchukova, Angelina (2024-10-04)
Doctoral dissertation
Lappeenranta-Lahti University of Technology LUT
Acta Universitatis Lappeenrantaensis
School of Engineering Science
School of Engineering Science, Computational Engineering
All rights reserved.
The permanent address of the publication is
https://urn.fi/URN:ISBN:978-952-412-118-7
Description
No accessibility information
Abstract
Inverse problems arise when a quantity of interest must be reconstructed from noisy, indirect measurements of it. Examples include remote sensing, weather prediction, and medical imaging. Despite the different application areas, inverse problems share a common peculiarity: they are ill-posed. Mathematically, this means that a solution does not exist, is not unique, or does not depend continuously on the measurement data. In practice, a stable solution can typically only be obtained if a suitable regularization technique is used.
Regularization can be seen as a form of restriction that enforces desired properties of the solution. For example, in the Bayesian approach to inverse problems, all quantities are treated as random variables, and the role of regularization is played by the prior distribution, which encodes our a priori knowledge about the solution. Given the prior, the measurements, and the computational model, the solution to a Bayesian inverse problem is formulated as the posterior distribution via Bayes' theorem. In practice, the prior distribution can be constructed to promote specific behavior of the solution. In this thesis, we aim to design flexible priors that are useful for reconstructing rough features, that is, for modeling different types of behavior in the unknown quantity of interest, ranging from smooth to piecewise constant.
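The role of the prior as regularization can be illustrated with a minimal linear-Gaussian sketch (illustrative only, not the thesis code): for measurements y = Ax + e with Gaussian noise and a Gaussian prior on x, Bayes' theorem yields a Gaussian posterior whose mean solves a regularized least-squares system. The blurring operator, variances, and test signal below are all assumed for illustration.

```python
import numpy as np

# Illustrative sketch: y = A x + e with e ~ N(0, s2 I) and a Gaussian
# prior x ~ N(0, t2 I). Bayes' theorem then gives a Gaussian posterior
# whose mean solves
#   (A^T A / s2 + I / t2) x = A^T y / s2.

rng = np.random.default_rng(0)
n = 50

# A mild 1-D blurring (convolution) operator with weights 0.2, 0.6, 0.2
A = 0.6 * np.eye(n) + 0.2 * np.eye(n, k=1) + 0.2 * np.eye(n, k=-1)

x_true = np.zeros(n)
x_true[15:35] = 1.0                      # piecewise-constant unknown
s2, t2 = 0.01**2, 1.0                    # noise and prior variances (assumed)
y = A @ x_true + np.sqrt(s2) * rng.standard_normal(n)

# Posterior mean = regularized least-squares solution
x_post = np.linalg.solve(A.T @ A / s2 + np.eye(n) / t2, A.T @ y / s2)
print(np.linalg.norm(x_post - x_true), np.linalg.norm(y - x_true))
```

Here the posterior mean deblurs the data noticeably better than using the raw measurements; heavier-tailed priors such as those studied in the thesis replace the Gaussian prior to better preserve sharp jumps.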
In pursuit of such flexibility, we introduce two classes of priors based on Student's t distribution. In the first class, we combine the t distribution with a Markov random field structure and employ a Gaussian scale mixture representation to facilitate posterior characterization. In the second class, we explore neural network priors whose parameters are sampled from the t distribution. We test both priors on different inverse problems, including signal deconvolution, deblurring, and X-ray computed tomography. In addition, we consider a real-life example of a severely ill-posed problem, namely sparse X-ray tomography for imaging wood logs.
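The Gaussian scale mixture representation mentioned above can be sketched as follows (a standard identity, not the thesis code): if lam ~ Gamma(nu/2, rate = nu/2) and x | lam ~ N(0, 1/lam), then marginally x follows a Student's t distribution with nu degrees of freedom. The degrees of freedom and sample size below are illustrative choices.

```python
import numpy as np

# Gaussian scale mixture (GSM) identity behind Student's t:
#   lam ~ Gamma(nu/2, rate=nu/2),  x | lam ~ N(0, 1/lam)
#   =>  x ~ t_nu  marginally.
# Conditionally on lam the distribution is Gaussian, which is what
# makes posterior characterization tractable.

rng = np.random.default_rng(1)
nu = 6.0            # degrees of freedom (illustrative choice)
n = 200_000

lam = rng.gamma(shape=nu / 2, scale=2.0 / nu, size=n)  # rate = nu/2
x_gsm = rng.normal(0.0, 1.0 / np.sqrt(lam))            # N(0, 1/lam) draws
x_t = rng.standard_t(nu, size=n)                       # direct t_nu draws

# For nu > 2 the variance of t_nu is nu / (nu - 2); here 6/4 = 1.5,
# so both sample variances should be close to 1.5.
print(x_gsm.var(), x_t.var())
```

Sampling through the mixture rather than directly from the t distribution is what allows, for example, Gibbs-type samplers to alternate between conditionally Gaussian updates for x and conjugate Gamma updates for the scale variables.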
Collections
- Doctoral dissertations [1190]
