Image Super-Resolution with Deep Variational Autoencoders

Darius Chira*, Ilian Haralampiev, Ole Winther, Andrea Dittadi, Valentin Liévin

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

Image super-resolution (SR) techniques are used to generate a high-resolution image from a low-resolution image. Until now, deep generative models such as autoregressive models and Generative Adversarial Networks (GANs) have proven to be effective at modelling high-resolution images. VAE-based models have often been criticised for their feeble generative performance, but with new advancements such as VDVAE, there is now strong evidence that deep VAEs have the potential to outperform current state-of-the-art models for high-resolution image generation. In this paper, we introduce VDVAE-SR, a new model that aims to exploit the most recent deep VAE methodologies to improve upon the results of similar models. VDVAE-SR tackles image super-resolution using transfer learning on pretrained VDVAEs. The presented model is competitive with other state-of-the-art models, having comparable results on image quality metrics.
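To make the idea in the abstract concrete, the sketch below illustrates the general transfer-learning pattern it describes: keep a pretrained deep VAE frozen and train only a small new module that conditions the model on the low-resolution input. This is a minimal, hypothetical illustration, not the authors' VDVAE-SR code; the names (PretrainedVAE, LRConditioner, sr_step), network sizes, and loss weights are made up for the example, and the real VDVAE is a far deeper hierarchical model.

# Minimal sketch of "freeze a pretrained VAE, train only an LR-conditioning module".
# All module names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PretrainedVAE(nn.Module):
    """Stand-in for a pretrained deep VAE (e.g. VDVAE); weights are assumed given."""
    def __init__(self, channels=64, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2 * latent_dim, 3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1),
        )


class LRConditioner(nn.Module):
    """New, trainable module that injects low-resolution information."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, latent_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(latent_dim, 2 * latent_dim, 3, padding=1),
        )

    def forward(self, lr, size):
        # Upsample the LR image to the latent spatial size and predict a shift
        # to the approximate posterior parameters.
        up = F.interpolate(lr, size=size, mode="bilinear", align_corners=False)
        return self.net(up)


def sr_step(vae, cond, hr, lr, optimizer):
    """One fine-tuning step: only the conditioner receives gradients."""
    stats = vae.encoder(hr)                      # frozen encoder on the HR image
    mu, logvar = stats.chunk(2, dim=1)
    d_mu, d_logvar = cond(lr, size=mu.shape[-2:]).chunk(2, dim=1)
    mu, logvar = mu + d_mu, logvar + d_logvar    # LR-conditional correction
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
    recon = vae.decoder(z)                       # frozen decoder
    rec_loss = F.mse_loss(recon, hr)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = rec_loss + 1e-3 * kl                  # illustrative KL weight
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    vae = PretrainedVAE()
    for p in vae.parameters():                   # transfer learning: freeze the VAE
        p.requires_grad_(False)
    cond = LRConditioner()
    opt = torch.optim.Adam(cond.parameters(), lr=1e-4)
    hr = torch.rand(2, 3, 64, 64)                                        # dummy HR batch
    lr = F.interpolate(hr, scale_factor=0.25, mode="bilinear",
                       align_corners=False)                              # matching LR input
    print(sr_step(vae, cond, hr, lr, opt))

The freeze-and-condition pattern shown here is only meant to convey what "transfer learning on pretrained VDVAEs" refers to in the abstract; the paper's actual architecture and training objective should be taken from the publication itself.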
Original language: English
Title of host publication: Proceedings of Computer Vision: ECCV 2022 Workshops
Volume: 13802
Publication date: 2023
Pages: 395-411
ISBN (Print): 978-3-031-25062-0
ISBN (Electronic): 978-3-031-25063-7
DOIs
Publication status: Published - 2023
Event: 17th European Conference on Computer Vision - Expo Tel Aviv, Tel Aviv, Israel
Duration: 23 Oct 2022 - 27 Oct 2022
https://eccv2022.ecva.net/

Conference

Conference: 17th European Conference on Computer Vision
Location: Expo Tel Aviv
Country/Territory: Israel
City: Tel Aviv
Period: 23/10/2022 - 27/10/2022
Internet address: https://eccv2022.ecva.net/

Keywords

  • Deep variational autoencoders
  • SR
  • Single-image super-resolution
  • Transfer learning
  • VDVAE
