Comparison of the room’s dimensions and absorption distribution estimation performance using wave-based and geometrical acoustics dataset

Yuanxin Xia, Zhihan Guo, Cheol-Ho Jeong*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

Deep neural networks (DNNs) are trained to extract room dimensions and absorption configurations from room transfer function (TF) measurements. This study investigates the performance of DNNs in room acoustic analyses when trained with wave-based (WB) and geometrical acoustics (GA) simulation data. WB simulation data provide a physically accurate representation of room acoustics, including diffraction and interference, albeit with substantial computational demands. In contrast, GA data can be generated more rapidly, but with reduced accuracy. We found that the DNN trained with WB data exhibits better estimation performance and generalization capability when applied to real-world measurements. This study underscores the trade-off between training dataset generation speed and the performance of machine learning algorithms in this inverse problem.
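The abstract describes a regression task: a DNN maps a measured or simulated room transfer function to room dimensions and absorption values. The sketch below is not the authors' implementation; the network size, the number of frequency bins, the number of surfaces, and the synthetic placeholder data are all illustrative assumptions used only to show the general input/output structure of such an inverse model.

```python
# Minimal sketch (assumed architecture, not from the paper): a fully connected
# network mapping a TF magnitude spectrum to room dimensions and per-surface
# absorption coefficients.
import torch
import torch.nn as nn

N_FREQ_BINS = 512   # assumed length of the sampled TF magnitude spectrum
N_DIMENSIONS = 3    # room length, width, height
N_SURFACES = 6      # assumed: one absorption coefficient per wall/floor/ceiling

model = nn.Sequential(
    nn.Linear(N_FREQ_BINS, 256),
    nn.ReLU(),
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, N_DIMENSIONS + N_SURFACES),  # joint regression head
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder batch standing in for WB- or GA-simulated training data;
# in practice the inputs would come from room acoustic simulations and the
# targets from the known room geometry and absorption configuration.
tf_batch = torch.randn(32, N_FREQ_BINS)
target_batch = torch.rand(32, N_DIMENSIONS + N_SURFACES)

for epoch in range(10):
    optimizer.zero_grad()
    prediction = model(tf_batch)
    loss = loss_fn(prediction, target_batch)
    loss.backward()
    optimizer.step()
```

Under this framing, the paper's comparison amounts to generating the training pairs with either a wave-based or a geometrical acoustics simulator and evaluating the resulting models on real-world measurements.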
Original language: English
Title of host publication: Proceedings of the 10th Convention of the European Acoustics Association
Number of pages: 7
Publication date: 2023
Publication status: Published - 2023
Event: 10th Convention of the European Acoustics Association, Politecnico di Torino, Torino, Italy
Duration: 11 Sept 2023 – 15 Sept 2023
https://www.fa2023.org/

Conference

Conference: 10th Convention of the European Acoustics Association
Location: Politecnico di Torino
Country/Territory: Italy
City: Torino
Period: 11/09/2023 – 15/09/2023
Internet address: https://www.fa2023.org/

Keywords

  • Machine learning
  • Inverse problem
  • Room acoustic simulation
  • Absorption coefficient
  • Geometrical acoustics
  • Wave-based simulation
