Deep Diffeomorphic Transformer Networks

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review



Spatial Transformer layers allow neural networks, at least in principle, to be invariant to large spatial transformations in image data. The model has, however, seen limited uptake, as most practical implementations support only transformations that are too restricted, e.g. affine or homographic maps, and/or destructive maps, such as thin plate splines. We investigate the use of flexible diffeomorphic image transformations within such networks and demonstrate that significant performance gains can be attained over currently used models. The learned transformations are found to be both simple and intuitive, thereby providing insights into individual problem domains. With the proposed framework, a standard convolutional neural network matches state-of-the-art results on face verification with only two extra lines of simple TensorFlow code.
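The diffeomorphic transformations referred to in the abstract are smooth, invertible warps of the image grid. One standard way to construct such a warp is to integrate a stationary velocity field with the scaling-and-squaring scheme. The following is a minimal 1-D numpy sketch of that idea, not the authors' TensorFlow implementation; the velocity field `v` and step count `n_steps` are illustrative choices:

```python
import numpy as np

def scaling_and_squaring(v, n_steps=8):
    """Approximate the diffeomorphism phi = exp(v) for a 1-D stationary
    velocity field v sampled on a regular grid over [0, 1].

    The velocity is assumed to vanish at the boundaries, so the warp
    maps [0, 1] onto itself.
    """
    x = np.linspace(0.0, 1.0, len(v))
    # Small initial step: phi_0(x) = x + v(x) / 2**n_steps
    phi = x + v / (2 ** n_steps)
    for _ in range(n_steps):
        # Repeatedly compose the map with itself (phi <- phi o phi),
        # using linear interpolation to evaluate phi off the grid.
        phi = np.interp(phi, x, phi)
    return phi

# Illustrative velocity field that vanishes at the domain boundaries.
x = np.linspace(0.0, 1.0, 100)
phi = scaling_and_squaring(0.05 * np.sin(np.pi * x))
# phi is strictly increasing, i.e. the warp is invertible and
# preserves the ordering of pixels -- the defining property that
# thin plate splines and other destructive maps can violate.
```

Because each squaring step only composes small, order-preserving displacements, the resulting map stays monotone, which is what distinguishes a diffeomorphic warp from an unconstrained deformation.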
Original language: English
Title of host publication: Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Publication date: 2018
ISBN (Electronic): 978-1-5386-6420-9
Publication status: Published - 2018
Event: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition - Salt Lake City, United States
Duration: 18 Jun 2018 - 23 Jun 2018


Conference: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Country: United States
City: Salt Lake City
Series: IEEE Conference on Computer Vision and Pattern Recognition. Proceedings

