Deep Diffeomorphic Transformer Networks

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

Spatial Transformer layers allow neural networks, at least in principle, to be invariant to large spatial transformations in image data. The model has, however, seen limited uptake as most practical implementations support only transformations that are too restricted, e.g. affine or homographic maps, and/or destructive maps, such as thin plate splines. We investigate the use of flexible diffeomorphic image transformations within such networks and demonstrate that significant performance gains can be attained over currently used models. The learned transformations are found to be both simple and intuitive, thereby providing insights into individual problem domains. With the proposed framework, a standard convolutional neural network matches state-of-the-art results on face verification with only two extra lines of simple TensorFlow code.
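
The mechanism the abstract refers to can be pictured with a short sketch: a localisation sub-network predicts the parameters of a velocity field, the field is integrated into a diffeomorphic warp, and the input image is resampled under that warp before the rest of the network sees it. The NumPy/SciPy code below is a minimal, hypothetical illustration of the warping step only, using a generic stationary velocity field integrated by scaling and squaring; it is not the parameterisation or TensorFlow implementation used in the paper, and the function names are invented for this sketch.

# Hypothetical sketch: integrate a stationary velocity field into a
# diffeomorphic warp (scaling and squaring) and resample an image under it.
# Not the paper's implementation; for illustration only.
import numpy as np
from scipy.ndimage import map_coordinates

def integrate_velocity(v, steps=6):
    # v: velocity field of shape (2, H, W); returns a displacement field
    # approximating the flow exp(v), built by repeatedly composing the
    # map with itself (scaling and squaring).
    disp = v / (2.0 ** steps)
    grid = np.mgrid[0:v.shape[1], 0:v.shape[2]].astype(float)
    for _ in range(steps):
        coords = grid + disp
        disp = disp + np.stack([
            map_coordinates(disp[c], coords, order=1, mode='nearest')
            for c in range(2)
        ])
    return disp

def diffeo_warp(image, v, steps=6):
    # Pull-back resampling with bilinear interpolation:
    # output(x) = image(x + disp(x)).
    grid = np.mgrid[0:image.shape[0], 0:image.shape[1]].astype(float)
    coords = grid + integrate_velocity(v, steps)
    return map_coordinates(image, coords, order=1, mode='nearest')

# Toy usage: warp a random 64x64 image with a constant velocity field,
# which integrates to a plain translation.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
vel = np.stack([np.full((64, 64), 2.0), np.zeros((64, 64))])
warped = diffeo_warp(img, vel)
print(warped.shape)  # (64, 64)

In the network setting, the velocity-field parameters would come from a localisation sub-network, and since the bilinear sampling is differentiable, such a layer can in principle be trained end to end with the rest of the model.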
Original language: English
Title of host publication: Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Publisher: IEEE
Publication date: 2018
Pages: 4403-4412
ISBN (Electronic): 978-1-5386-6420-9
DOIs
Publication status: Published - 2018
Event: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition - Salt Lake City, United States
Duration: 18 Jun 2018 - 23 Jun 2018
https://ieeexplore.ieee.org/xpl/conhome/8576498/proceeding?isnumber=8578098&refinementName=Author

Conference

Conference: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Country/Territory: United States
City: Salt Lake City
Period: 18/06/2018 - 23/06/2018
Internet address
Series: IEEE Conference on Computer Vision and Pattern Recognition. Proceedings
ISSN: 1063-6919
