How Few Annotations are Needed for Segmentation Using a Multi-planar U-Net?

William Michael Laprade*, Mathias Perslev, Jon Sporring

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Book chapter › Research › peer-review

Abstract

U-Net architectures are an extremely powerful tool for segmenting 3D volumes, and the recently proposed multi-planar U-Net reduces the computational cost of applying the U-Net architecture to three-dimensional isotropic data by operating on a subset of two-dimensional planes. While multi-planar sampling considerably reduces the amount of training data needed, providing the required manually annotated data can still be a daunting task. In this article, we investigate the multi-planar U-Net’s ability to learn three-dimensional structures in isotropically sampled images from sparsely annotated training samples. We extend the multi-planar U-Net with random annotations, and we present our empirical findings on two public domains, fully annotated by an expert. Surprisingly, we find that the multi-planar U-Net on average outperforms the 3D U-Net in most cases in terms of Dice score, sensitivity, and specificity, and that similar performance from the multi-planar U-Net can be obtained from half the number of annotations by doubling the number of automatically generated training planes. Thus, sometimes less is more!
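The core idea of multi-planar sampling — extracting 2D training slices at arbitrary 3D orientations from an isotropic volume — can be illustrated with a short sketch. This is not the authors' implementation (the published method involves additional details such as how plane orientations are distributed and how predictions are fused back into 3D); the function name, plane size, and trilinear interpolation choice below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_random_plane(volume, plane_size=64, rng=None):
    """Extract one 2D slice at a random 3D orientation from `volume`.

    Hypothetical sketch of multi-planar sampling: build a random
    orthonormal in-plane basis (u, v), lay a regular grid of plane
    coordinates through the volume centre, and interpolate.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Random orthonormal basis spanning the plane.
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)
    v = rng.normal(size=3)
    v -= (v @ u) * u          # make v perpendicular to u
    v /= np.linalg.norm(v)
    center = np.array(volume.shape) / 2.0
    # In-plane grid of voxel offsets, centred on the volume.
    s = np.arange(plane_size) - plane_size / 2.0
    gu, gv = np.meshgrid(s, s, indexing="ij")
    coords = (center[:, None, None]
              + u[:, None, None] * gu
              + v[:, None, None] * gv)          # shape (3, H, W)
    # Trilinear interpolation of the volume at the plane coordinates.
    return map_coordinates(volume, coords, order=1, mode="nearest")

vol = np.random.rand(64, 64, 64).astype(np.float32)
plane = sample_random_plane(vol, plane_size=32)
print(plane.shape)  # (32, 32)
```

Generating many such planes per volume is what lets a 2D network see three-dimensional structure, and it suggests why doubling the number of generated planes can compensate for fewer manual annotations.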
Original language: English
Title of host publication: MICCAI Workshop on Deep Generative Models
Publisher: Springer
Publication date: 2021
Pages: 209-216
ISBN (Print): 978-3-030-88209-9
DOIs
Publication status: Published - 2021
Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13003
ISSN: 0302-9743

Keywords

  • 3D imaging
  • Deep learning
  • Segmentation
  • Sparse annotations
  • U-Net
