Abstract
Score-based models, trained with denoising score matching, are remarkably effective at generating high-dimensional data. However, the high variance of their training objective hinders optimisation. We attempt to reduce it with a control variate, derived via a k-th order Taylor expansion of the training objective and of its gradient. We prove an equivalence between the two constructions, demonstrate the effectiveness of our approach empirically in a low-dimensional setting, and study its effect on larger problems.
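The paper itself is not reproduced on this page, but the idea in the abstract can be illustrated concretely. Below is a minimal PyTorch sketch of a first-order (k = 1) Taylor control variate for the denoising score matching objective on scalar data. The function names, the expansion around the noiseless input, and the restriction to the objective (rather than its gradient) are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import torch

def dsm_loss(score, x, sigma, eps):
    """Standard denoising score matching loss for one noise draw eps ~ N(0, 1)."""
    return ((score(x + sigma * eps) + eps / sigma) ** 2).sum()

def dsm_loss_cv(score, x, sigma, eps):
    """DSM loss with a first-order Taylor control variate (1-D sketch).

    Expanding score(x + sigma*eps) to first order around eps = 0 gives
        c(eps) = (score(x) + (sigma * score'(x) + 1/sigma) * eps)^2,
    whose mean under eps ~ N(0, 1) is available in closed form:
        E[c] = score(x)^2 + (sigma * score'(x) + 1/sigma)^2.
    The estimator raw - c + E[c] is unbiased for the DSM loss and has
    lower variance whenever c correlates with the raw loss.
    Assumes `score` acts elementwise on a batch of scalars.
    """
    x = x.detach().requires_grad_(True)
    s = score(x)
    # derivative of the score at x (scalar data, so the Jacobian is a scalar)
    (ds,) = torch.autograd.grad(s.sum(), x, create_graph=True)
    a = sigma * ds + 1.0 / sigma
    c = (s + a * eps) ** 2          # control variate evaluated at this eps
    c_mean = s ** 2 + a ** 2        # its analytic expectation over eps
    raw = (score(x + sigma * eps) + eps / sigma) ** 2
    return (raw - c + c_mean).sum()
```

Because eps enters c(eps) in the same way it enters the raw loss up to first order, the two are strongly correlated, so subtracting c and adding back its known mean removes much of the noise-induced variance without biasing the objective.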
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the Structured Probabilistic Inference & Generative Modeling workshop of ICML 2024 |
| Number of pages | 14 |
| Publication date | 2024 |
| Publication status | Published - 2024 |
| Event | ICML 2024 Workshop on Structured Probabilistic Inference & Generative Modeling, Vienna, Austria, 26 Jul 2024 |
Workshop
| Workshop | ICML 2024 Workshop on Structured Probabilistic Inference & Generative Modeling |
| --- | --- |
| Country/Territory | Austria |
| City | Vienna |
| Period | 26/07/2024 |