Abstract
Denoising diffusion probabilistic models (DDPMs) are powerful hierarchical
latent variable models with remarkable sample generation quality and training
stability. These properties can be attributed to parameter sharing in the
generative hierarchy, as well as a parameter-free diffusion-based inference
procedure. In this paper, we present Few-Shot Diffusion Models (FSDM), a
framework for few-shot generation leveraging conditional DDPMs. FSDMs are
trained to adapt the generative process conditioned on a small set of images
from a given class by aggregating image patch information using a set-based
Vision Transformer (ViT). At test time, the model is able to generate samples
from previously unseen classes conditioned on as few as 5 samples from that
class. We empirically show that FSDM can perform few-shot generation and
transfer to new datasets. We benchmark variants of our method on complex vision
datasets for few-shot learning and compare to unconditional and conditional
DDPM baselines. Additionally, we show how conditioning the model on patch-based
input set information improves training convergence.
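The conditioning mechanism described above can be illustrated with a toy sketch: patches of the few support images are embedded and aggregated into a single context vector, which then modulates the noise predictor. This is a minimal, hypothetical illustration only — mean pooling over patch tokens stands in for the paper's set-based ViT, the linear "denoiser" is a placeholder for the actual conditional DDPM network, and all weights and shapes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def patchify(images, patch=8):
    """Split a batch of (N, H, W, C) images into flattened patches."""
    n, h, w, c = images.shape
    ph, pw = h // patch, w // patch
    x = images.reshape(n, ph, patch, pw, patch, c)
    x = x.transpose(0, 1, 3, 2, 4, 5).reshape(n, ph * pw, patch * patch * c)
    return x  # (N, num_patches, patch_dim)

def set_context(support, w_embed):
    """Aggregate patch information from the support set into one context
    vector. The real model uses a set-based ViT; mean pooling over all
    patches of all support images is a crude stand-in."""
    patches = patchify(support)        # (N, P, D)
    tokens = patches @ w_embed         # (N, P, d)
    return tokens.mean(axis=(0, 1))    # (d,)

def denoise_step(x_t, t, context, w_x, w_c):
    """Toy conditional noise predictor eps(x_t, t, c): linear in the noisy
    input plus a context-dependent shift. In a real DDPM, t would also
    modulate the network; it is unused in this illustration."""
    flat = x_t.reshape(x_t.shape[0], -1)
    eps = flat @ w_x + (context @ w_c)  # context shift broadcast over batch
    return eps.reshape(x_t.shape)

# A 5-shot support set from one class, plus noisy samples to denoise.
support = rng.normal(size=(5, 32, 32, 3))
x_t = rng.normal(size=(2, 32, 32, 3))
d_embed = 64
w_embed = rng.normal(size=(8 * 8 * 3, d_embed)) * 0.02
w_x = rng.normal(size=(32 * 32 * 3, 32 * 32 * 3)) * 0.02
w_c = rng.normal(size=(d_embed, 32 * 32 * 3)) * 0.02

c = set_context(support, w_embed)
eps_hat = denoise_step(x_t, t=10, context=c, w_x=w_x, w_c=w_c)
```

At test time, the same pipeline would be run with a support set from an unseen class: only the context vector changes, so the generative network can adapt without any weight updates.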
| Original language | English |
|---|---|
| Title of host publication | Proceedings of 2022 Workshop on Score-Based Methods |
| Number of pages | 25 |
| Publication date | 2022 |
| Publication status | Published - 2022 |
| Event | NeurIPS 2022 Workshop on Score-Based Methods, New Orleans Convention Center, New Orleans, United States. Duration: 2 Dec 2022 → 2 Dec 2022. https://score-based-methods-workshop.github.io/ |
Workshop
| Workshop | NeurIPS 2022 Workshop on Score-Based Methods |
|---|---|
| Location | New Orleans Convention Center |
| Country/Territory | United States |
| City | New Orleans |
| Period | 02/12/2022 → 02/12/2022 |
| Internet address | https://score-based-methods-workshop.github.io/ |