Explainability and generalizability of glaucoma detection models
Mojsiejuk, Wojciech (2023)
Master's thesis
School of Engineering Science, Computational Engineering
All rights reserved.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi-fe2023060151635
Abstract
Glaucoma is a group of sight-threatening eye diseases for which, to date, there is no causal treatment. Current prognoses estimate that the number of people affected by the disease will increase from 76 million in 2020 to 111 million in 2040. The intrinsic difficulty in detecting open-angle glaucoma, the most common form of the disease, lies in the lack of visible symptoms in the early stages of its development. Recent advances in artificial intelligence have enabled high-performance state-of-the-art models for the detection of major ophthalmic diseases. However, because of the black-box nature of neural networks and their uncertain ability to generalize to future data, their adoption in clinical practice remains limited. Explainable artificial intelligence methods have recently been used to gain an insightful understanding of computer-aided diagnoses. This thesis uses open-access datasets of glaucomatous fundus images, including RIM-ONE DL and ACRIMA, and applies post-hoc explainability tools to state-of-the-art models to evaluate their robustness, stability, and consistency in glaucoma prediction.
