Description
Nowadays, the pure-tone audiogram is the main tool used to characterize the degree of hearing loss and to fit hearing aids. However, the perceptual consequences of a hearing loss typically involve not only a loss of sensitivity, but also a loss of clarity (distortion loss) that is not captured by the audiogram. A detailed characterization of hearing deficits can be complex and must be simplified in order to investigate efficiently the specific compensation needs of individual listeners. The aim of this study was to characterize individual hearing deficits by means of a test battery that captures the diverse aspects of hearing loss, considering not only the loss of sensitivity but also supra-threshold distortions.
It was hypothesized that any listener's hearing can be characterized along two dimensions: distortion type I and distortion type II. While distortion type I can be linked to factors affecting audibility, distortion type II is considered a non-audibility-related distortion, or clarity loss. To evaluate this hypothesis, the data from two studies were re-analyzed using a data-driven approach. Both studies administered an extensive battery of psychoacoustic tests to potential hearing-aid users. The new analysis was based on archetypal analysis, an unsupervised-learning method that identifies extreme patterns in the data, which provide the basis for different auditory profiles. Subsequently, a decision tree was derived that enables a simple classification of listeners into one of the profiles.
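The core idea of archetypal analysis is that each listener's test-battery scores can be approximated as a convex combination of a small number of extreme patterns (archetypes). The following minimal sketch illustrates only the weight-fitting step for one listener with the archetypes held fixed; the full method alternates between updating weights and archetypes, and all data here (archetype matrix `Z`, listener vector `x`) are made-up assumptions for illustration, not values from the study.

```python
import numpy as np

def simplex_project(v):
    """Euclidean projection of v onto the probability simplex
    (weights non-negative and summing to 1)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = idx[u - css / idx > 0][-1]
    theta = css[rho - 1] / rho
    return np.maximum(v - theta, 0.0)

def fit_weights(x, Z, steps=500, lr=0.01):
    """Projected gradient descent for the convex weights w minimizing
    ||x - w @ Z||^2, with w constrained to the simplex."""
    k = Z.shape[0]
    w = np.full(k, 1.0 / k)          # start from uniform weights
    for _ in range(steps):
        grad = 2.0 * (w @ Z - x) @ Z.T
        w = simplex_project(w - lr * grad)
    return w

# Toy example: 2 hypothetical archetypes over 3 made-up measures
Z = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
x = 0.3 * Z[0] + 0.7 * Z[1]          # a listener lying between the archetypes
w = fit_weights(x, Z)
print(np.round(w, 2))                # recovers weights close to [0.3, 0.7]
```

Listeners whose weight vector is dominated by a single archetype exhibit that extreme pattern most clearly, which is the basis for assigning them to an auditory profile.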
This novel approach provided evidence for the existence of four distinct “auditory profiles” in the data. The most significant predictors for profile identification were related to temporal processing, peripheral compression, and speech perception. The approach is promising for identifying the most relevant tests for auditory profiling and for developing new fitting strategies based on the individual's deficits.
Period: 19 Aug 2017
Event title: 1st International workshop on Challenges in Assistive Devices Technology (CHAT-2017): Interspeech2017