EGAIN: Extended GAn INversion

Wassim Kabbani, Marcel Grimmer, Christoph Busch

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

Generative Adversarial Networks (GANs) have witnessed significant advances in recent years, generating images of increasingly high quality that are indistinguishable from real ones. Recent GANs have been shown to encode features in a disentangled latent space, enabling precise control over various semantic attributes of the generated facial images, such as pose, illumination, or gender. GAN inversion, the projection of images into the latent space of a GAN, opens the door to the manipulation of facial semantics of real face images. This is useful for numerous applications, such as evaluating the performance of face recognition systems. In this work, EGAIN, an architecture for constructing GAN inversion models, is presented. This architecture explicitly addresses some of the shortcomings of previous GAN inversion models. A specific model with the same name, egain, based on this architecture is also proposed, demonstrating superior reconstruction quality over state-of-the-art models and illustrating the validity of the EGAIN architecture.
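The abstract describes GAN inversion as projecting an image into the latent space of a GAN. A minimal sketch of the common optimization-based formulation is shown below, using a toy linear generator in place of a real pretrained GAN (the dimensions and names here are illustrative assumptions, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained GAN generator: a fixed linear map from an
# 8-dimensional latent space to a 64-dimensional "image" space.
# A real inversion pipeline would use a generator such as StyleGAN instead.
W = rng.normal(size=(64, 8))

def generate(z):
    return W @ z

# A target "image" produced from an unknown latent code.
z_true = rng.normal(size=8)
x = generate(z_true)

# Optimization-based inversion: gradient descent on 0.5 * ||G(z) - x||^2,
# searching for the latent code whose reconstruction matches the target.
z = np.zeros(8)
lr = 0.005
for _ in range(1000):
    residual = generate(z) - x
    z -= lr * (W.T @ residual)  # gradient of the squared error w.r.t. z

# After convergence, generate(z) reconstructs the target image x.
print(np.allclose(generate(z), x, atol=1e-4))
```

With a nonlinear generator the same objective is typically minimized with automatic differentiation (e.g. Adam on the latent code), often augmented with perceptual losses; the linear case above only illustrates the projection idea.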

Original language: English
Title of host publication: Proceedings of the 10th European Workshop on Visual Information Processing
Number of pages: 6
Publisher: IEEE
Publication date: 2022
ISBN (Electronic): 9781665466233
Publication status: Published - 2022
Event: 10th European Workshop on Visual Information Processing, Lisbon, Portugal
Duration: 11 Sept 2022 – 14 Sept 2022

Keywords

  • Face Recognition
  • GAN
  • GAN Inversion
