
An Information-Theoretic Approach to Personalized Explainable Machine Learning

Nardelli, Pedro; Jung, Alexander (2020-05-07)

View/Open
jung_et_al_an_information_postprint.pdf (283.7Kb)

Post-print / Final draft


IEEE Signal Processing Letters

IEEE

School of Energy Systems

All rights reserved.
© 2020 IEEE
https://doi.org/10.1109/LSP.2020.2993176
The permanent address of the publication is
https://urn.fi/URN:NBN:fi-fe2020051435520

Abstract

Automated decision making is used routinely throughout our everyday life. Recommender systems decide which jobs, movies, or other user profiles might be interesting to us. Spell checkers help us to make good use of language. Fraud detection systems decide if a credit card transaction should be verified more closely. Many of these decision-making systems use machine learning methods that fit complex models to massive datasets. The successful deployment of machine learning (ML) methods to many (critical) application domains crucially depends on their explainability. Indeed, humans have a strong desire to get explanations that resolve the uncertainty about experienced phenomena, such as the predictions and decisions obtained from ML methods. Explainable ML is challenging since explanations must be tailored (personalized) to individual users with varying backgrounds. Some users might have received university-level education in ML, while other users might have no formal training in linear algebra. Linear regression with few features might be perfectly interpretable for the first group but might be considered a black box by the latter. We propose a simple probabilistic model for the predictions and user knowledge. This model allows us to study explainable ML using information theory. Explaining is here considered as the task of reducing the "surprise" incurred by a prediction. We quantify the effect of an explanation by the conditional mutual information between the explanation and the prediction, given the user background.
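As a brief illustration of the last sentence, the standard definition of conditional mutual information can be written out for this setting; the symbols $e$ (explanation), $\hat{y}$ (prediction), and $u$ (user background) are assumed here for illustration and need not match the paper's own notation:

$$ I(e;\hat{y}\mid u) \;=\; \mathbb{E}\!\left[\log\frac{p(e,\hat{y}\mid u)}{p(e\mid u)\,p(\hat{y}\mid u)}\right] $$

On this reading, an explanation is effective when $I(e;\hat{y}\mid u)$ is large, i.e., when observing $e$ removes much of the surprise about $\hat{y}$ that remains after conditioning on the user background $u$.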

Citation

Jung, A., & Nardelli, P. (2020). An Information-Theoretic Approach to Personalized Explainable Machine Learning. IEEE Signal Processing Letters. https://doi.org/10.1109/LSP.2020.2993176

Collections
  • Scientific publications [1210]