Generative adversarial networks with physical sound field priors

Xenofon Karakonstantis, Efren Fernandez-Grande

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

This paper presents a deep learning-based approach for the spatiotemporal reconstruction of sound fields using generative adversarial networks. The method utilises a plane wave basis and learns the underlying statistical distributions of pressure in rooms to accurately reconstruct sound fields from a limited number of measurements. The performance of the method is evaluated using two established datasets and compared to state-of-the-art methods. The results show that the model achieves improved reconstruction performance in terms of accuracy and energy retention, particularly in the high-frequency range and when extrapolating beyond the measurement region. Furthermore, the proposed method can handle a varying number of measurement positions and configurations without sacrificing performance. The results suggest that generative models incorporating a physically informed prior offer a promising approach to sound field reconstruction and related problems in acoustics.
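The abstract describes the method only at a high level: a generative model produces coefficients over a plane wave basis, and the resulting field is matched to a small set of pressure measurements. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's architecture or training procedure: the Generator network, the random choice of plane wave directions, the dimensions, and the latent-space optimisation loop are all assumptions made for the example. In practice the generator would be the trained GAN generator rather than the untrained stand-in used here.

```python
import numpy as np
import torch
import torch.nn as nn

# Plane wave model: pressure at position r_m is p(r_m) = sum_n c_n * exp(-1j * k_n . r_m),
# where |k_n| = 2*pi*f/c0 is fixed by the frequency and only the directions and
# coefficients c_n vary.

def plane_wave_matrix(positions, directions, freq, c0=343.0):
    """Build the M x N complex matrix H with H[m, n] = exp(-1j * k_n . r_m)."""
    k = 2 * np.pi * freq / c0                  # wavenumber magnitude
    k_vecs = k * directions                    # N x 3 wave vectors
    phase = positions @ k_vecs.T               # M x N dot products k_n . r_m
    return np.exp(-1j * phase).astype(np.complex64)

class Generator(nn.Module):
    """Hypothetical stand-in for a trained GAN generator: maps a latent vector z
    to the real and imaginary parts of N plane wave coefficients."""
    def __init__(self, latent_dim=32, n_waves=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * n_waves),       # real and imaginary parts
        )

    def forward(self, z):
        re, im = self.net(z).chunk(2, dim=-1)
        return torch.complex(re, im)           # N complex coefficients

def reconstruct(p_meas, r_meas, r_grid, freq, n_waves=256, latent_dim=32,
                n_iters=500, lr=1e-2):
    """Search the generator's latent space so the modelled field matches the
    measured pressures, then evaluate the field on a dense grid of positions."""
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(n_waves, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit directions

    H_meas = torch.as_tensor(plane_wave_matrix(r_meas, dirs, freq))
    H_grid = torch.as_tensor(plane_wave_matrix(r_grid, dirs, freq))
    p_meas = torch.as_tensor(p_meas, dtype=torch.complex64)

    gen = Generator(latent_dim, n_waves)       # would be the trained generator
    z = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)

    for _ in range(n_iters):
        opt.zero_grad()
        p_hat = H_meas @ gen(z)                # field at the microphone positions
        loss = torch.mean(torch.abs(p_hat - p_meas) ** 2)
        loss.backward()
        opt.step()

    with torch.no_grad():
        return (H_grid @ gen(z)).numpy()       # extrapolated / interpolated field
```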
Original language: English
Journal: Journal of the Acoustical Society of America
Volume: 154
Issue number: 2
Pages (from-to): 1226-1238
ISSN: 0001-4966
DOIs
Publication status: Published - 2023
