Explainability as statistical inference

Hugo Henri Joseph Senetaire*, Damien Garreau, Jes Frellsen, Pierre-Alexandre Mattei

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Book chapter › Research › peer-review


Abstract

A wide variety of model explanation approaches have been proposed in recent years, all guided by very different rationales and heuristics. In this paper, we take a new route and cast interpretability as a statistical inference problem. We propose a general deep probabilistic model designed to produce interpretable predictions. The model parameters can be learned via maximum likelihood, and the method can be adapted to any predictor network architecture and any type of prediction problem. Our method is an instance of amortized interpretability, where a neural network is used as a selector to allow for fast interpretation at inference time. Several popular interpretability methods are shown to be particular cases of regularized maximum likelihood for our general model. We propose new datasets with ground-truth selections, which allow for the evaluation of feature importance maps. Using these datasets, we show experimentally that using multiple imputation provides more reasonable interpretations.
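As a rough illustration of the amortized-selector idea described in the abstract, the sketch below (not the authors' code; the class name, the relaxed-Bernoulli masking, and the batch-resampling imputation are assumptions of this example) trains a selector network jointly with a predictor by maximizing a penalized likelihood averaged over several imputations of the non-selected features.

```python
# Hypothetical sketch of amortized feature selection with multiple imputation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectorPredictor(nn.Module):
    def __init__(self, d_in, d_hidden, n_classes, n_imputations=5, temperature=0.5):
        super().__init__()
        # Selector: amortized network producing per-feature selection logits.
        self.selector = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_in)
        )
        # Predictor: any architecture; here a small MLP classifier.
        self.predictor = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Linear(d_hidden, n_classes)
        )
        self.n_imputations = n_imputations
        self.temperature = temperature

    def forward(self, x, x_marginal):
        # Relaxed Bernoulli mask so the selector can be trained by gradient descent.
        logits = self.selector(x)
        mask = torch.distributions.RelaxedBernoulli(
            torch.tensor(self.temperature), logits=logits
        ).rsample()
        log_probs = []
        for _ in range(self.n_imputations):
            # Multiple imputation: replace non-selected features with samples from
            # (an approximation of) the data distribution; here we simply resample
            # rows of a reference batch as a stand-in for the marginal.
            idx = torch.randint(0, x_marginal.size(0), (x.size(0),))
            x_imputed = mask * x + (1 - mask) * x_marginal[idx]
            log_probs.append(F.log_softmax(self.predictor(x_imputed), dim=-1))
        # Average the predictive probabilities over imputations (in log space).
        return torch.logsumexp(torch.stack(log_probs), dim=0) - torch.log(
            torch.tensor(float(self.n_imputations))
        )

# Usage: maximize the penalized likelihood of the labels; a sparsity penalty on the
# selection probabilities encourages compact explanations (selected-feature maps).
model = SelectorPredictor(d_in=20, d_hidden=64, n_classes=2)
x = torch.randn(32, 20)
y = torch.randint(0, 2, (32,))
log_py = model(x, x_marginal=x)
loss = F.nll_loss(log_py, y) + 1e-3 * torch.sigmoid(model.selector(x)).mean()
loss.backward()
```

At inference time, a single forward pass of the selector yields the per-feature selection probabilities used as the importance map, which is what makes the approach amortized.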
Original language: English
Title of host publication: Proceedings of the 40th International Conference on Machine Learning
Volume: 202
Publisher: Proceedings of Machine Learning Research
Publication date: 2023
Pages: 30584-30612
Publication status: Published - 2023
Event: 40th International Conference on Machine Learning - Honolulu, United States
Duration: 23 Jul 2023 - 29 Jul 2023

Conference

Conference: 40th International Conference on Machine Learning
Country/Territory: United States
City: Honolulu
Period: 23/07/2023 - 29/07/2023
