Explainable AI – Preface

Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, Klaus-Robert Müller

Research output: Chapter in Book/Report/Conference proceeding › Book chapter › Research › peer-review

Abstract

The development of “intelligent” systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for a broader adoption of AI technology is the inherent risk that comes with giving up human control and oversight to “intelligent” machines. For sensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI, focused on exposing complex AI models to humans in a systematic and interpretable manner.

Original language: English
Title of host publication: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
Publisher: Springer
Publication date: 1 Jan 2019
Pages: v-vii
ISBN (Print): 978-3-030-28953-9
Publication status: Published - 1 Jan 2019
Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11700
ISSN: 0302-9743
