U-Net architectures are an extremely powerful tool for segmenting 3D volumes, and the recently proposed multi-planar U-Net has reduced the computational requirement for applying the U-Net architecture to three-dimensional isotropic data to a subset of two-dimensional planes. While multi-planar sampling considerably reduces the amount of training data needed, providing the required manually annotated data can still be a daunting task. In this article, we investigate the multi-planar U-Net's ability to learn three-dimensional structures in isotropically sampled images from sparsely annotated training samples. We extend the multi-planar U-Net with random annotations, and we present our empirical findings on two public datasets, each fully annotated by an expert. Surprisingly, we find that the multi-planar U-Net on average outperforms the 3D U-Net in most cases in terms of Dice score, sensitivity, and specificity, and that similar performance from the multi-planar U-Net can be obtained from half the number of annotations by doubling the number of automatically generated training planes. Thus, sometimes less is more!
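The idea of training from sparse random annotations can be illustrated with a minimal sketch: only a random fraction of voxels is treated as annotated, and the per-voxel loss is averaged over those voxels alone. This is a hypothetical illustration, not the authors' implementation; the function names, the binary-segmentation setting, and the uniform-random masking scheme are all assumptions made for clarity.

```python
import numpy as np

def random_annotation_mask(shape, fraction, rng):
    # Mark a random `fraction` of voxels as annotated (True),
    # simulating sparse manual labeling of a volume or plane.
    return rng.random(shape) < fraction

def masked_pixel_loss(pred, target, mask):
    # Binary cross-entropy restricted to annotated voxels:
    # unannotated voxels contribute no gradient signal.
    eps = 1e-7
    p = np.clip(pred, eps, 1 - eps)
    ce = -(target * np.log(p) + (1 - target) * np.log(1 - p))
    return float(np.sum(ce * mask) / max(mask.sum(), 1))
```

With such a masked loss, halving the annotated fraction can in principle be compensated by sampling more training planes from the same volume, which is the trade-off the article studies empirically.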
Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
- 3D imaging
- Deep learning
- Sparse annotations