Deep Diffeomorphic Transformer Networks

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

Spatial Transformer layers allow neural networks, at least in principle, to be invariant to large spatial transformations in image data. The model has, however, seen limited uptake as most practical implementations support only transformations that are too restricted, e.g. affine or homographic maps, and/or destructive maps, such as thin plate splines. We investigate the use of flexible diffeomorphic image transformations within such networks and demonstrate that significant performance gains can be attained over currently-used models. The learned transformations are found to be both simple and intuitive, thereby providing insights into individual problem domains. With the proposed framework, a standard convolutional neural network matches state-of-the-art results on face verification with only two extra lines of simple TensorFlow code.
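For context, the core of a standard Spatial Transformer layer is a parameterized coordinate warp followed by differentiable bilinear sampling. The minimal NumPy sketch below shows only the conventional affine case (the restricted family the paper argues against); the paper's contribution replaces this affine family with flexible CPAB diffeomorphisms, whose implementation is not reproduced here.

```python
import numpy as np

def affine_grid(theta, H, W):
    # Target-pixel coordinates, normalized to [-1, 1] as in the
    # standard spatial transformer formulation.
    ys, xs = np.meshgrid(np.linspace(-1, 1, H),
                         np.linspace(-1, 1, W), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # (3, H*W)
    return theta @ coords  # (2, H*W): source coordinate per target pixel

def bilinear_sample(img, grid):
    # Differentiable bilinear interpolation of img at the warped grid.
    H, W = img.shape
    x = (grid[0] + 1) * (W - 1) / 2  # back to pixel indices
    y = (grid[1] + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    wx, wy = x - x0, y - y0
    out = (img[y0, x0] * (1 - wx) * (1 - wy)
           + img[y0, x0 + 1] * wx * (1 - wy)
           + img[y0 + 1, x0] * (1 - wx) * wy
           + img[y0 + 1, x0 + 1] * wx * wy)
    return out.reshape(H, W)

# An identity affine transform should reproduce the input image exactly.
img = np.arange(16.0).reshape(4, 4)
theta = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
warped = bilinear_sample(img, affine_grid(theta, 4, 4))
```

Because both the grid generation and the sampling are differentiable in `theta`, the warp parameters can be learned end-to-end by backpropagation; the paper swaps the 6-parameter `theta` for a diffeomorphism parameterization with the same plug-in structure.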
Original language: English
Title of host publication: Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Number of pages: 10
Publisher: IEEE
Publication date: 2018
Pages: 4403-4412
ISBN (Electronic): 978-1-5386-6420-9
DOI: 10.1109/CVPR.2018.00463
Publication status: Published - 2018
Event: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition - Salt Lake City, United States
Duration: 18 Jun 2018 - 23 Jun 2018

Conference

Conference: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Country: United States
City: Salt Lake City
Period: 18/06/2018 - 23/06/2018
Series: IEEE Conference on Computer Vision and Pattern Recognition. Proceedings
ISSN: 1063-6919

Cite this

Detlefsen, N. S., Freifeld, O., & Hauberg, S. (2018). Deep Diffeomorphic Transformer Networks. In Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4403-4412). IEEE. (IEEE Conference on Computer Vision and Pattern Recognition. Proceedings). https://doi.org/10.1109/CVPR.2018.00463