Abstract
Segmentation uncertainty models predict a distribution over plausible segmentations for a given input, which they learn from annotator variation in the training set. In practice, however, these annotations can differ systematically in how they were generated, for example through the use of different labeling tools. The result is datasets that contain both genuine data variability and differing label styles. In this paper, we demonstrate that applying state-of-the-art segmentation uncertainty models to such datasets can introduce model bias caused by the different label styles. We present an updated modelling objective that conditions on label style for aleatoric uncertainty estimation, and modify two state-of-the-art architectures for segmentation uncertainty accordingly. We show through extensive experiments that this method reduces label style bias while improving segmentation performance, increasing the applicability of segmentation uncertainty models in the wild. We curate two datasets with annotations in different label styles, which we will make publicly available along with our code upon publication.
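The core idea of conditioning on label style can be illustrated with a minimal sketch: instead of modelling p(y | x), the prediction head receives an extra style code and models p(y | x, s). The snippet below is a toy, self-contained illustration of that conditioning, not the paper's actual architectures; the feature map, the one-hot style encoding, and the linear head are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_mask(features, style_id, W, n_styles=2):
    """Per-pixel foreground probability, conditioned on label style.

    features : (H, W, F) per-pixel feature map (e.g. from an encoder)
    style_id : integer id of the annotation/label style
    W        : (F + n_styles,) weights of a toy linear prediction head
    """
    h, w, f = features.shape
    style = np.zeros(n_styles)
    style[style_id] = 1.0  # one-hot style code s
    # Broadcast the style code to every pixel and concatenate with features,
    # so the head models p(y | x, s) rather than p(y | x).
    style_map = np.broadcast_to(style, (h, w, n_styles))
    x = np.concatenate([features, style_map], axis=-1)
    return sigmoid(x @ W)

features = rng.normal(size=(4, 4, 3))
W = rng.normal(size=(3 + 2,))
mask_style0 = predict_mask(features, 0, W)
mask_style1 = predict_mask(features, 1, W)
# Same image, different style code: the predicted masks differ
# systematically, which is exactly the effect being modelled.
```

In the paper's setting the style code would condition a full segmentation uncertainty model; here it simply shifts the logits of a linear head, which is enough to show how the same input yields style-dependent predictions.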
Original language | English |
---|---|
Title of host publication | Proceedings of Eleventh International Conference on Learning Representations |
Number of pages | 19 |
Publication date | 2023 |
Publication status | Published - 2023 |
Event | Eleventh International Conference on Learning Representations, Kigali Convention Centre, Kigali, Rwanda. Duration: 1 May 2023 → 5 May 2023. Conference number: 11. https://iclr.cc/Conferences/2023 |
Conference
Conference | Eleventh International Conference on Learning Representations |
---|---|
Number | 11 |
Location | Kigali Convention Centre |
Country/Territory | Rwanda |
City | Kigali |
Period | 01/05/2023 → 05/05/2023 |
Internet address | https://iclr.cc/Conferences/2023 |