Abstract
In cocktail-party environments, listeners are able to comprehend and localize multiple simultaneous talkers. With current virtual reality (VR) technology and virtual acoustics, it has become possible to present an audio-visual cocktail-party scenario in a controlled laboratory environment. A new continuous speech corpus with ten monologues from five female and five male talkers was designed and recorded, with each monologue covering a distinctly different topic. Using an egocentric interaction method in VR, subjects were asked to label perceived talkers according to source position and content of speech while the number of simultaneously presented talkers was varied. The subjects’ accuracy in performing this task was found to decrease as the number of talkers increased. When more than six talkers were in a scene, the number of talkers was underestimated and the azimuth localization error increased. This method offers a new approach to gauging listeners’ ability to analyze complex audio-visual scenes.
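The abstract quantifies performance via an azimuth localization error but does not spell out the metric here. Below is a minimal sketch assuming one plausible definition: the absolute angular difference between the azimuth a subject assigns to a talker and that talker’s true azimuth, wrapped to ±180° and averaged over talkers. All function names and numbers are hypothetical illustrations, not the study’s actual analysis code.

```python
import numpy as np

def azimuth_error(reported_deg, true_deg):
    """Smallest signed angular difference between reported and true
    source azimuths, wrapped to [-180, 180) degrees."""
    diff = np.asarray(reported_deg, dtype=float) - np.asarray(true_deg, dtype=float)
    return (diff + 180.0) % 360.0 - 180.0

# Hypothetical numbers: three talkers, azimuths the subject assigned via
# the egocentric VR pointer vs. the actual source azimuths (degrees).
true_az = [-60.0, 0.0, 45.0]
reported_az = [-52.0, 5.0, 170.0]  # last talker grossly mislocalized

abs_err = np.abs(azimuth_error(reported_az, true_az))
print("per-talker error (deg):", abs_err)            # [  8.   5. 125.]
print("mean absolute error (deg):", abs_err.mean())  # 46.0
```

Wrapping the difference matters for sources near the rear: a reported −170° against a true 175° should count as a 15° error, not 345°.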
Original language | English |
---|---|
Title of host publication | Proceedings of the International Symposium on Auditory and Audiological Research: Auditory Learning in Biological and Artificial Systems |
Volume | 7 |
Publisher | The Danavox Jubilee Foundation |
Publication date | 2020 |
Pages | 357-364 |
Publication status | Published - 2020 |
Event | International Symposium on Auditory and Audiological Research: Auditory Learning in Biological and Artificial Systems (conference number 7), Nyborg, Denmark, 21 Aug 2020 → 23 Aug 2020, http://isaar.eu |
Conference
Conference | International Symposium on Auditory and Audiological Research |
---|---|
Number | 7 |
Country/Territory | Denmark |
City | Nyborg |
Period | 21/08/2020 → 23/08/2020 |
Internet address | http://isaar.eu |
Series | Proceedings of the International Symposium on Auditory and Audiological Research |
---|---|
Volume | 7 |
ISSN | 2596-5522 |
Keywords
- Auditory scene analysis
- Speech perception
- Virtual reality