Robustness of Visual Explanations to Common Data Augmentation Methods

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

As the use of deep neural networks continues to grow, understanding their behaviour has become more crucial than ever. Post-hoc explainability methods are a potential solution, but their reliability is being called into question. Our research investigates the response of post-hoc visual explanations to naturally occurring transformations, often referred to as augmentations. We expect explanations to be invariant under certain transformations, such as changes to the colour map, while responding equivariantly to transformations such as translation, object scaling, and rotation. We find marked differences in robustness depending on the type of transformation, with some explainability methods (such as LRP composites and Guided Backprop) being more stable than others. We also explore the role of training with data augmentation. We provide evidence that explanations are typically less robust to augmentation than classification performance, regardless of whether data augmentation is used during training.
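The equivariance expectation described above can be made concrete: for a geometric transformation T, one compares the explanation of the transformed input, explain(T(x)), with the transformed explanation of the original input, T(explain(x)). Below is a minimal sketch of such a check, assuming a PyTorch image classifier and using plain input-gradient saliency as a stand-in explainer; the function names and the rotation test are illustrative and are not the paper's actual code or metric.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def saliency(model, x):
    """Plain input-gradient saliency: |d(max logit)/d(input)|, summed over channels."""
    x = x.clone().requires_grad_(True)
    score = model(x).max(dim=1).values.sum()  # max logit per sample, summed for batch backward
    score.backward()
    return x.grad.abs().sum(dim=1)  # shape (N, H, W)

def rotation_equivariance_gap(model, x, angle=15.0):
    """MSE between rotate(explain(x)) and explain(rotate(x)).

    Zero would mean the explanation is perfectly equivariant under rotation.
    """
    expl_then_rot = TF.rotate(saliency(model, x).unsqueeze(1), angle)
    rot_then_expl = saliency(model, TF.rotate(x, angle)).unsqueeze(1)
    return F.mse_loss(expl_then_rot, rot_then_expl).item()
```

For transformations where invariance rather than equivariance is expected (e.g. a colour-map change), the analogous check would compare explain(T(x)) directly against explain(x), with no transformation applied to the explanation itself.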
Original language: English
Title of host publication: Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Publisher: IEEE
Publication date: 2023
Pages: 3715-3720
ISBN (Print): 979-8-3503-0250-9
ISBN (Electronic): 979-8-3503-0249-3
DOIs
Publication status: Published - 2023
Event: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops - Vancouver, Canada
Duration: 17 Jun 2023 - 24 Jun 2023

Conference

Conference: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
Country/Territory: Canada
City: Vancouver
Period: 17/06/2023 - 24/06/2023
