Membership Inference Attacks for Face Images Against Fine-Tuned Latent Diffusion Models

Lauritz Christian Holme, Anton Mosquera Storgaard, Siavash Arjomand Bigdeli

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

The rise of generative image models raises privacy concerns about the large datasets used to train them. This paper investigates whether it can be inferred that a set of face images was used to fine-tune a Latent Diffusion Model (LDM). A Membership Inference Attack (MIA) method is presented for this task. Using generated auxiliary data to train the attack model leads to significantly better performance, as does the use of watermarks. The guidance scale used for inference was found to have a significant influence, whereas the text prompt used for inference has no significant influence once the LDM has been fine-tuned for long enough. The proposed MIA is found to be viable in a realistic black-box setup against LDMs fine-tuned on face images.
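To make the setup concrete, the sketch below shows the general shape of a black-box MIA attack model of the kind the abstract describes: per-image scores are computed from samples drawn from the target model, and a binary classifier is trained to separate members from non-members. Everything here (the cosine-similarity feature, the logistic-regression attack model, the synthetic stand-in data, and the function names) is an illustrative assumption, not the paper's actual method.

```python
# Minimal sketch of a black-box membership inference attack (MIA) against a
# fine-tuned generative model. All names and design choices here are
# illustrative assumptions, not the method from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def extract_features(query_images, generated_images):
    """Hypothetical feature: max cosine similarity between each query image
    and a pool of images sampled from the fine-tuned target model."""
    q = query_images / np.linalg.norm(query_images, axis=1, keepdims=True)
    g = generated_images / np.linalg.norm(generated_images, axis=1, keepdims=True)
    sims = q @ g.T                           # pairwise cosine similarities
    return sims.max(axis=1, keepdims=True)   # one score per query image

# Stand-in data: flattened image embeddings. In a real attack, "members" would
# be images from the fine-tuning set, "non_members" held-out face images, and
# "generated" would be (auxiliary) samples produced by the target LDM.
members     = rng.normal(0.5, 1.0, size=(200, 128))
non_members = rng.normal(0.0, 1.0, size=(200, 128))
generated   = rng.normal(0.5, 1.0, size=(500, 128))

X = np.vstack([extract_features(members, generated),
               extract_features(non_members, generated)])
y = np.concatenate([np.ones(len(members)), np.zeros(len(non_members))])

# The attack model is a simple binary classifier over the per-image scores.
attack_model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, attack_model.predict_proba(X)[:, 1])
print(f"attack-model training AUC: {auc:.3f}")
```

In this toy version the member distribution overlaps with the generated samples, so the similarity feature is informative; the paper's finding that generated auxiliary data and watermarks improve attack performance would enter at the feature-extraction and training-data stages of such a pipeline.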
Original language: English
Title of host publication: Proceedings of the 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Volume: 2
Publisher: SCITEPRESS Digital Library
Publication date: 2025
Pages: 439-446
ISBN (Electronic): 978-989-758-728-3
DOIs
Publication status: Published - 2025
Event: 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Porto, Portugal
Duration: 26 Feb 2025 - 28 Feb 2025

Conference

Conference: 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Country/Territory: Portugal
City: Porto
Period: 26/02/2025 - 28/02/2025

Keywords

  • Latent Diffusion Model
  • Membership Inference Attack

