Explainable artificial intelligence (XAI) : making AI understandable for end users
Haque, AKM Bahalul (2025-05-14)
Doctoral dissertation
Lappeenranta-Lahti University of Technology LUT
Acta Universitatis Lappeenrantaensis
School of Engineering Science
School of Engineering Science, Software Engineering
All rights reserved.
The permanent address of this publication is
https://urn.fi/URN:ISBN:978-952-412-239-9
Abstract
Explainable artificial intelligence (XAI) emerged to address the “black box” nature of traditional artificial intelligence (AI) models and to foster users’ informed decision-making. While XAI has recently been widely investigated, most of the research focuses on technical perspectives, such as model interpretability and algorithmic transparency. This leaves a significant gap in user-centric explainability research, particularly regarding end users’ perceptions. This dissertation addresses that gap by exploring end users’ notion of explainability and analyzing their insights. Specifically, it explores the factors that trigger end users to opt for explainability, end users’ preferences and requirements, and the possible impact of explanations on these users. Accordingly, the following research questions are addressed: (1) What critical research areas, user-level insights, and design-level insights are discussed in previous literature? (2) What core user needs and preferences drive the demand for explainability in AI systems? (3) How can explanations be delivered to end users in an optimized manner, and what are their impacts? (4) What are the design principles for user-centric XAI systems in terms of the presentation of explanations?
To address these research questions, this thesis employs a systematic literature review (SLR), semi-structured interviews, focus group discussions (FGDs), and design science research (DSR). The SLR was conducted to investigate research areas, trends, gaps, and tentative future research directions based on recent scholarly publications. Next, semi-structured interviews were conducted to elicit end users’ requirements for explainable recommendations and their impacts; these requirements revealed how end users want explanations to be presented to them. FGDs were conducted to investigate the drivers that motivate end users to ask for explanations. In both cases, thematic analysis was performed to identify themes related to the research questions and the objectives of the publications. Finally, DSR was used to propose actionable design principles (DPs) for XAI-based system development. The DPs can be adopted by practitioners and adapted to domain- and industry-specific environments.
The findings reveal that users require explanations due to the opacity of content recommendations, a lack of transparency in the system’s decision-making, and cyber privacy and security concerns. These factors affect users’ informed decision-making while interacting with AI-based systems. Users therefore require tailored explanations that align with their cognitive capabilities, categorical information representation, the option to request supplementary information, explanations in various formats (textual, visual, or hybrid), and the ability to converse with the system about the rationale behind its outputs. The findings suggest deploying explanation verification techniques and interactive user interfaces to foster usability and understanding among end users. The findings are translated into actionable design principles for XAI developers and researchers, using cognitive load theory (CLT) and information quality dimensions as theoretical lenses.
The findings of this thesis contribute to the knowledge base of user-centric XAI by investigating end users’ requirements and preferences. This research explores users’ requirements for explanation delivery and derives explanation quality dimensions that are theoretically grounded in information quality dimensions. The explanation delivery attributes also contribute to the affordances of explanation, in connection with which the dual effect of explainable systems is discussed. These findings can be essential in creating user-specific mental models that help deliver customized explanations. This thesis also contributes to the design knowledge of XAI by combining CLT and information quality dimensions to develop actionable design principles. These DPs provide valuable insights for AI practitioners, system designers, and policymakers in developing responsible, user-friendly AI-based systems and governance frameworks for AI systems.
Collections
- Doctoral dissertations [1110]