Variance reduction of diffusion model's gradients with Taylor approximation-based control variate

Paul Jeha, Will Grathwohl, Michael Riis Andersen, Carl Henrik Ek, Jes Frellsen

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

Score-based models, trained with denoising score matching, are remarkably effective at generating high-dimensional data. However, the high variance of their training objective hinders optimisation. We attempt to reduce it with a control variate, derived via a k-th order Taylor expansion of the training objective and its gradient. We prove an equivalence between the two, demonstrate empirically the effectiveness of our approach in a low-dimensional problem setting, and study its effect on larger problems.
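To illustrate the general idea behind the abstract (not the paper's actual estimator), the following sketch shows how a Taylor-based control variate can reduce the variance of a Monte Carlo estimate. The toy function `f`, the Gaussian sampling distribution, and all parameter values are our own assumptions chosen for illustration: a first-order Taylor expansion of `f` around the mean yields a surrogate whose expectation is known in closed form, so subtracting it and adding back its known mean leaves the estimator unbiased while shrinking its variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (illustrative only, not the paper's objective):
# estimate E[f(x)] for x ~ N(mu, s^2) with f(x) = exp(x).
mu, s, n = 1.0, 0.5, 100_000
x = rng.normal(mu, s, size=n)

f = np.exp(x)

# First-order Taylor control variate around mu:
# g(x) = f(mu) + f'(mu) * (x - mu), with known mean E[g] = f(mu).
g = np.exp(mu) + np.exp(mu) * (x - mu)

# Unbiased control-variate estimator: average the residual f - g,
# then add back the analytically known E[g].
naive_estimate = np.mean(f)
cv_estimate = np.mean(f - g) + np.exp(mu)

# Per-sample variances of the two estimators' integrands.
naive_var = np.var(f)
cv_var = np.var(f - g)
print(naive_estimate, cv_estimate, naive_var, cv_var)
```

Because `f - g` only contains the second- and higher-order Taylor remainder, its variance is far smaller than that of `f` itself, while both estimators target the same expectation. The paper's contribution applies this style of correction to the denoising score matching objective and its gradient.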
Original language: English
Title of host publication: Proceedings of the Structured Probabilistic Inference & Generative Modeling workshop of ICML 2024
Number of pages: 14
Publication date: 2024
Publication status: Published - 2024
Event: ICML 2024 Workshop on Structured Probabilistic Inference & Generative Modeling - Vienna, Austria
Duration: 26 Jul 2024 → 26 Jul 2024

Workshop

Workshop: ICML 2024 Workshop on Structured Probabilistic Inference & Generative Modeling
Country/Territory: Austria
City: Vienna
Period: 26/07/2024 → 26/07/2024

