Multistage audiovisual integration of speech: dissociating identification and detection

Kasper Eskelund, Jyrki Tuomainen, Tobias Andersen

    Research output: Contribution to journal › Journal article › Research › peer-review

    Abstract

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion, in which seeing the talking face influences the auditory phonetic percept, and by the audiovisual detection advantage, in which seeing the talking face improves the detectability of the acoustic speech signal. Here we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion occurred only for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.
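
    Sine wave speech is generally constructed by replacing the lowest few formants of a natural utterance with time-varying sinusoids that follow each formant's frequency and amplitude track. The Python sketch below illustrates only that general technique, not the stimulus pipeline used in the paper; the function name synthesize_sws, the input track arrays, and the sampling parameters are illustrative assumptions, and the formant tracks themselves would come from a separate formant tracker.

    import numpy as np

    def synthesize_sws(formant_freqs, formant_amps, frame_rate, sample_rate=16000):
        """Replace each formant track with a time-varying sinusoid and sum them.

        formant_freqs, formant_amps: arrays of shape (n_frames, n_formants),
        sampled at `frame_rate` frames per second.
        """
        n_frames, n_formants = formant_freqs.shape
        n_samples = int(n_frames * sample_rate / frame_rate)
        t_frames = np.arange(n_frames) / frame_rate
        t_samples = np.arange(n_samples) / sample_rate

        signal = np.zeros(n_samples)
        for k in range(n_formants):
            # Interpolate the per-frame tracks up to audio rate.
            freq = np.interp(t_samples, t_frames, formant_freqs[:, k])
            amp = np.interp(t_samples, t_frames, formant_amps[:, k])
            # Accumulate phase so frequency changes stay continuous (no clicks).
            phase = 2 * np.pi * np.cumsum(freq) / sample_rate
            signal += amp * np.sin(phase)

        # Normalise to avoid clipping.
        return signal / np.max(np.abs(signal))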
    Original language: English
    Journal: Experimental Brain Research
    Volume: 208
    Pages (from-to): 447-457
    ISSN: 0014-4819
    DOIs
    Publication status: Published - 2011

    Keywords

    • Audiovisual speech perception
    • Sine wave speech
    • Multisensory integration
    • Speech identification
    • Speech detection
    • McGurk illusion

