Stochastic gradient descent in inverse problems
Dondjio Nguefack, Olivier Rutherford (2024)
Master's thesis
School of Engineering Science, Computational Engineering
All rights reserved.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi-fe2024061250765
Abstract
Inverse problems involve determining causes from observed effects and play an important role in a variety of fields. Traditional optimization techniques for solving inverse problems often face challenges related to computational complexity and sensitivity to noise in the data. Stochastic Gradient Descent (SGD) has emerged as a powerful tool for addressing these challenges, thanks to its ability to handle large datasets and its robustness to noise. This thesis studies the application of SGD to solving inverse problems. We focus on an SGD variant called tail-averaged SGD, which averages the last iterations of the SGD algorithm to improve the variance properties of standard SGD. We theoretically demonstrate that tail-averaged SGD converges to the excess risk minimizer and can achieve an optimality bound, thereby offering a more reliable solution to inverse problems.
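To illustrate the idea described above, the following is a minimal sketch of tail-averaged SGD applied to a linear least-squares inverse problem min_x ||Ax - b||², assuming a fixed step size, uniform row sampling, and averaging over the last half of the iterates; the function name, parameters, and problem setup are illustrative, not taken from the thesis.

```python
import numpy as np

def tail_averaged_sgd(A, b, n_iters=2000, step=0.01, tail_frac=0.5, seed=0):
    """Sketch of tail-averaged SGD for min_x ||A x - b||^2 (assumed setup)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    tail_start = int(n_iters * (1.0 - tail_frac))  # start of the averaged tail
    x_avg = np.zeros(n)
    count = 0
    for t in range(n_iters):
        i = rng.integers(m)                  # sample one observation uniformly
        grad = (A[i] @ x - b[i]) * A[i]      # stochastic gradient of 0.5*(a_i.x - b_i)^2
        x = x - step * grad                  # standard SGD update
        if t >= tail_start:                  # accumulate only the tail iterates
            x_avg += x
            count += 1
    return x_avg / count                     # tail average reduces iterate variance

# Usage: recover x_true from noisy observations b = A x_true + noise
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
x_true = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = tail_averaged_sgd(A, b)
```

Averaging only the tail, rather than all iterates, discards the early transient phase of SGD while still damping the noise-driven fluctuations of the final iterates.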
