Interpretability in Intelligent Systems – A New Concept?

Lars Kai Hansen*, Laura Rieger

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Book chapter › Research › peer-review

Abstract

The very active community for interpretable machine learning can learn from the rich 50+ year history of explainable AI. We here give two specific examples from this legacy that could enrich current interpretability work: first, explanation desiderata, where we point to the rich set of ideas developed in the ‘explainable expert systems’ field, and second, tools for quantifying the uncertainty of high-dimensional feature importance maps, which have been developed in the field of computational neuroimaging.
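To illustrate the second point, the sketch below estimates the uncertainty of a feature importance map by refitting a model on bootstrap resamples and reporting the per-feature mean and standard deviation of the resulting importance scores. This is a minimal Python illustration of the general resampling idea only, not the chapter's actual method; the synthetic dataset, the logistic regression model, and the use of absolute coefficients as importance scores are assumptions made here for brevity.

    # Minimal sketch: resampling-based uncertainty for a feature importance map.
    # Model, data, and importance measure are illustrative assumptions, not the
    # chapter's method.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.utils import resample

    X, y = make_classification(n_samples=500, n_features=20,
                               n_informative=5, random_state=0)

    n_boot = 100
    importances = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        # Refit the model on a bootstrap resample of the data.
        Xb, yb = resample(X, y, random_state=b)
        clf = LogisticRegression(max_iter=1000).fit(Xb, yb)
        # Use absolute coefficients as a simple per-feature importance score.
        importances[b] = np.abs(clf.coef_.ravel())

    # Per-feature mean importance and its bootstrap standard deviation:
    # features whose mean does not clearly exceed its uncertainty should not
    # be emphasized in an explanation.
    mean_imp = importances.mean(axis=0)
    std_imp = importances.std(axis=0)
    for i, (m, s) in enumerate(zip(mean_imp, std_imp)):
        print(f"feature {i:2d}: importance {m:.3f} +/- {s:.3f}")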

Original language: English
Title of host publication: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
Publisher: Springer
Publication date: 1 Jan 2019
Pages: 41-49
ISBN (Print): 9783030289539
DOIs: 10.1007/978-3-030-28954-6_3
Publication status: Published - 1 Jan 2019
Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11700 LNCS
ISSN: 0302-9743

Keywords

  • Interpretable AI
  • Machine learning
  • Uncertainty quantification

Cite this

Hansen, L. K., & Rieger, L. (2019). Interpretability in Intelligent Systems – A New Concept? In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning (pp. 41-49). Springer. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 11700 LNCS. https://doi.org/10.1007/978-3-030-28954-6_3