Adapting the Theory of Visual Attention (TVA) to model auditory attention

Katherine L. Roberts, Tobias Andersen, Søren Kyllingsbæk, Koen Lamberts

Research output: Contribution to conference › Conference abstract › Research › peer-review


Abstract

Mathematical and computational models have provided useful insights into normal and impaired visual attention, but less progress has been made in modelling auditory attention. We are developing a Theory of Auditory Attention (TAA), based on an influential visual model, the Theory of Visual Attention (TVA). We report that TVA provides a good fit to auditory data when the stimuli are closely matched to those used in visual studies. In the basic visual TVA task, participants view a brief display of letters and are asked to report either all of the letters (whole report) or a subset of letters (e.g., the red letters; partial report). For the auditory task, we used dichotic, concurrently presented synthesised vowels. These auditory stimuli are closely matched to the visual stimuli, in that they are simultaneous, separated in space, and unchanging over time. We found that TVA could successfully model the auditory data, producing good estimates of the rate at which information is encoded (C), the minimum exposure duration required for processing to begin (t0), and the relative attentional weight of targets versus distractors (α). Future work will address the issue of target-distractor confusion, and extend the model to accommodate stimuli that vary in their spectro-temporal profile.
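The parameters estimated here correspond to the standard TVA rate equations (Bundesen, 1990): each item x receives an attentional weight w_x, processing capacity C is divided among items in proportion to their weights, and encoding is an exponential race over the effective exposure duration t − t0. A minimal illustrative sketch of those equations follows; the function names and numerical values are our own, not taken from the study.

```python
import math

def attentional_weight(evidence, pertinence):
    """TVA weight equation: w_x = sum_i eta(x, i) * pi_i,
    summing sensory evidence for each feature i weighted by its pertinence."""
    return sum(e * p for e, p in zip(evidence, pertinence))

def processing_rate(C, w_x, w_total):
    """Rate equation: item x gets the share v_x = C * w_x / sum_z w_z
    of the total processing capacity C (items per second)."""
    return C * w_x / w_total

def p_encoded(v, t, t0):
    """Exponential race: probability that an item with rate v is encoded
    into visual short-term memory by exposure duration t, given threshold t0."""
    if t <= t0:
        return 0.0
    return 1.0 - math.exp(-v * (t - t0))

# Illustrative values: two targets (weight 1) and one distractor whose
# weight is alpha times a target's weight (alpha < 1 means targets dominate).
alpha = 0.4
C = 40.0          # hypothetical total capacity, items/s
t0 = 0.02         # hypothetical encoding threshold, seconds
w_total = 1.0 + 1.0 + alpha
v_target = processing_rate(C, 1.0, w_total)
p = p_encoded(v_target, 0.1, t0)  # chance a target is encoded in a 100 ms exposure
```

Fitting the model amounts to finding the C, t0 and α that maximise the likelihood of the observed report frequencies under these equations.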
Original language: English
Publication date: 2014
Number of pages: 1
Publication status: Published - 2014
Event: London Meeting of the Experimental Psychology Society - London, United Kingdom
Duration: 9 Jan 2014 – 10 Jan 2014
http://www.eps.ac.uk/

