A computational model of human auditory signal processing and perception

Morten Løve Jepsen, Stephan D. Ewert, Torsten Dau

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass modulation filterbank, a constant-variance internal noise, and an optimal detector stage. The model was evaluated in experimental conditions that reflect, to different degrees, effects of compression as well as spectral and temporal resolution in auditory processing. The experiments include intensity discrimination with pure tones and broadband noise, tone-in-noise detection, spectral masking with narrow-band signals and maskers, forward masking with tone signals and tone or noise maskers, and amplitude-modulation detection with narrow- and wideband noise carriers. The model can account for most of the key properties of the data and is more powerful than the original model. The model might be useful as a front end in technical applications.
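The abstract describes a cascade of processing stages. A minimal sketch of such a stage chain is shown below; it is illustrative only, not the published model. Only the 150-Hz modulation lowpass cutoff comes from the abstract — the half-wave rectifier, the 1-kHz hair-cell lowpass cutoff, and the first-order filter form are assumptions standing in for the model's actual basilar-membrane, transduction, adaptation, and modulation-filterbank stages.

```python
import numpy as np

def halfwave_rectify(x):
    # Crude stand-in for hair-cell transduction: keep positive half-waves.
    return np.maximum(x, 0.0)

def first_order_lowpass(x, fc, fs):
    # One-pole IIR lowpass; placeholder for the model's lowpass stages.
    a = np.exp(-2.0 * np.pi * fc / fs)
    y = np.zeros_like(x, dtype=float)
    prev = 0.0
    for i, v in enumerate(x):
        prev = (1.0 - a) * v + a * prev
        y[i] = prev
    return y

def internal_representation(signal, fs):
    # Illustrative chain: rectify -> lowpass (hair cell, assumed 1-kHz
    # cutoff) -> squaring expansion -> 150-Hz modulation lowpass (from
    # the abstract). The published model adds outer/middle-ear filters,
    # a nonlinear basilar membrane, adaptation, a modulation filterbank,
    # internal noise, and an optimal detector, all omitted here.
    x = halfwave_rectify(np.asarray(signal, dtype=float))
    x = first_order_lowpass(x, fc=1000.0, fs=fs)
    x = x ** 2
    x = first_order_lowpass(x, fc=150.0, fs=fs)
    return x
```

For example, feeding a short 1-kHz tone through `internal_representation` yields a smoothed, nonnegative envelope-like signal of the same length as the input.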
Original language: English
Journal: Journal of the Acoustical Society of America
Volume: 124
Issue number: 1
Pages (from-to): 422-438
ISSN: 0001-4966
DOIs
Publication status: Published - 2008

Bibliographical note

Copyright (2008) Acoustical Society of America. This article may be downloaded for personal use only. Any other use requires prior permission of the author and the Acoustical Society of America.
