Few-Shot Diffusion Models

Giorgio Giannone, Didrik Nielsen, Ole Winther

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

Denoising diffusion probabilistic models (DDPM) are powerful hierarchical latent variable models with remarkable sample generation quality and training stability. These properties can be attributed to parameter sharing in the generative hierarchy, as well as a parameter-free diffusion-based inference procedure. In this paper, we present Few-Shot Diffusion Models (FSDM), a framework for few-shot generation leveraging conditional DDPMs. FSDMs are trained to adapt the generative process conditioned on a small set of images from a given class, aggregating image patch information with a set-based Vision Transformer (ViT). At test time, the model can generate samples from previously unseen classes conditioned on as few as 5 samples from that class. We empirically show that FSDM can perform few-shot generation and transfer to new datasets. We benchmark variants of our method on complex vision datasets for few-shot learning and compare them to unconditional and conditional DDPM baselines. Additionally, we show how conditioning the model on patch-based input set information improves training convergence.
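To make the conditioning mechanism concrete, below is a minimal PyTorch sketch of the idea the abstract describes: a DDPM denoiser conditioned on a context vector produced by a transformer that aggregates patch embeddings of a small support set. The module names (`SetViTEncoder`, `CondDenoiser`, `fsdm_loss`), all sizes, and the toy MLP denoiser are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Sketch of the FSDM idea: a conditional DDPM whose denoiser receives a
# context vector built by a set-based transformer over image patches of a
# small support set. Names and sizes are hypothetical, for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SetViTEncoder(nn.Module):
    """Embed patches of every image in the support set, mix them with a
    transformer, and mean-pool into one permutation-invariant context."""
    def __init__(self, patch=8, dim=128, depth=2, heads=4):
        super().__init__()
        self.patch = patch
        self.proj = nn.Linear(3 * patch * patch, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, x_set):                        # (B, S, 3, H, W)
        B, S, C, H, W = x_set.shape
        p = self.patch
        # Split every image of every set into non-overlapping p x p patches.
        x = x_set.unfold(3, p, p).unfold(4, p, p)    # (B, S, C, H/p, W/p, p, p)
        x = x.permute(0, 1, 3, 4, 2, 5, 6).reshape(B, -1, C * p * p)
        tokens = self.encoder(self.proj(x))          # (B, S * n_patches, dim)
        return tokens.mean(dim=1)                    # (B, dim) context vector

class CondDenoiser(nn.Module):
    """Toy MLP stand-in for the conditional denoising network: predicts the
    noise eps from the noisy image x_t, the timestep t, and the context c."""
    def __init__(self, img_dim, dim=128, steps=1000):
        super().__init__()
        self.t_emb = nn.Embedding(steps, dim)
        self.net = nn.Sequential(
            nn.Linear(img_dim + 2 * dim, 512), nn.SiLU(),
            nn.Linear(512, 512), nn.SiLU(),
            nn.Linear(512, img_dim),
        )

    def forward(self, x_t, t, c):                    # x_t: (B, img_dim)
        return self.net(torch.cat([x_t, self.t_emb(t), c], dim=-1))

def fsdm_loss(encoder, denoiser, x, x_set, alphas_bar):
    """Standard DDPM epsilon-prediction loss, conditioned on the support set."""
    B = x.shape[0]
    x = x.flatten(1)
    t = torch.randint(0, len(alphas_bar), (B,), device=x.device)
    eps = torch.randn_like(x)
    ab = alphas_bar[t].unsqueeze(-1)
    x_t = ab.sqrt() * x + (1 - ab).sqrt() * eps      # sample q(x_t | x_0)
    c = encoder(x_set)                               # aggregate the set
    return F.mse_loss(denoiser(x_t, t, c), eps)

# Usage: 5-shot conditioning on 32x32 images, as in the few-shot setting.
if __name__ == "__main__":
    betas = torch.linspace(1e-4, 0.02, 1000)
    alphas_bar = torch.cumprod(1 - betas, dim=0)
    enc, den = SetViTEncoder(), CondDenoiser(img_dim=3 * 32 * 32)
    x = torch.randn(4, 3, 32, 32)                    # target images
    x_set = torch.randn(4, 5, 3, 32, 32)             # 5-image support sets
    fsdm_loss(enc, den, x, x_set, alphas_bar).backward()
```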
Original language: English
Title of host publication: Proceedings of 2022 Workshop on Score-Based Methods
Number of pages: 25
Publication date: 2022
Publication status: Published - 2022
Event: NeurIPS 2022 Workshop on Score-Based Methods, New Orleans Convention Center, New Orleans, United States
Duration: 2 Dec 2022 – 2 Dec 2022
https://score-based-methods-workshop.github.io/

Workshop

Workshop: NeurIPS 2022 Workshop on Score-Based Methods
Location: New Orleans Convention Center
Country/Territory: United States
City: New Orleans
Period: 02/12/2022 – 02/12/2022
Internet address: https://score-based-methods-workshop.github.io/
