DiffEnc: Variational Diffusion with a Learned Encoder

Beatrix M.G. Nielsen*, Anders Christensen, Andrea Dittadi, Ole Winther

*Corresponding author for this work

Research output: Contribution to conference › Paper › Research › peer-review


Abstract

Diffusion models may be viewed as hierarchical variational autoencoders (VAEs) with two improvements: parameter sharing for the conditional distributions in the generative process and efficient computation of the loss as independent terms over the hierarchy. We consider two changes to the diffusion model that retain these advantages while adding flexibility to the model. Firstly, we introduce a data- and depth-dependent mean function in the diffusion process, which leads to a modified diffusion loss. Our proposed framework, DiffEnc, achieves a statistically significant improvement in likelihood on CIFAR-10. Secondly, we let the ratio of the noise variance of the reverse encoder process and the generative process be a free weight parameter rather than fixing it to 1. This leads to theoretical insights: for a finite-depth hierarchy, the evidence lower bound (ELBO) can be used as an objective for a weighted diffusion loss approach and for optimizing the noise schedule specifically for inference. For the infinite-depth hierarchy, on the other hand, the weight parameter has to be 1 to have a well-defined ELBO.
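
Since this record contains only the abstract, the following is a loose PyTorch sketch of the two changes it describes, not the authors' implementation: the residual `Encoder`, the cosine variance-preserving schedule, the MLP architecture, and all names and shapes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of the two ideas in the abstract; everything here is an
# assumption for illustration, not the paper's architecture or schedule.

class Encoder(nn.Module):
    """Hypothetical data- and depth-dependent encoder producing x_t from x."""
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(), nn.Linear(hidden, dim)
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Predict a residual so x_t stays close to x; taking the depth t as
        # input makes the diffusion mean depend on both data and depth.
        return x + self.net(torch.cat([x, t[:, None]], dim=-1))

def alpha_sigma(t: torch.Tensor):
    """Variance-preserving noise schedule (an illustrative choice)."""
    return torch.cos(0.5 * torch.pi * t), torch.sin(0.5 * torch.pi * t)

def sample_z_t(encoder: Encoder, x: torch.Tensor, t: torch.Tensor):
    """Draw z_t ~ N(alpha_t * x_t, sigma_t^2 I), with a learned mean x_t."""
    x_t = encoder(x, t)  # learned mean replaces the raw data x
    alpha, sigma = alpha_sigma(t)
    eps = torch.randn_like(x)
    return alpha[:, None] * x_t + sigma[:, None] * eps, eps

def weighted_loss(eps_hat: torch.Tensor, eps: torch.Tensor, w: float = 1.0):
    # w stands in for the free variance-ratio weight from the abstract: for
    # a finite-depth hierarchy the weighted objective is still an ELBO, while
    # in the infinite-depth limit w must equal 1.
    return w * ((eps_hat - eps) ** 2).sum(dim=-1).mean()
```

For image data the encoder would be a U-Net rather than an MLP, but the interface, a mean that depends on both the data x and the depth t, is the point of the sketch.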
Original language: English
Publication date: 2024
Number of pages: 33
Publication status: Published - 2024
Event: The Twelfth International Conference on Learning Representations - Vienna, Austria
Duration: 7 May 2024 - 11 May 2024
Conference number: 12

Conference

Conference: The Twelfth International Conference on Learning Representations
Number: 12
Country/Territory: Austria
City: Vienna
Period: 07/05/2024 - 11/05/2024
