Speech perception is audiovisual, as evidenced by the McGurk effect, in which watching incongruent articulatory mouth movements can change the auditory phonetic percept. This type of audiovisual integration may be specific to speech, or it may apply to stimuli in general. To investigate this issue, Tuomainen et al. (2005) used sine-wave speech stimuli created from three time-varying sine waves tracking the formants of a natural speech signal. Naïve observers tend not to recognize sine-wave speech as speech but become able to decode its phonetic content when informed of the speech-like nature of the signal. The sine-wave speech was dubbed onto congruent and incongruent video of a talking face. Tuomainen et al. found that the McGurk effect did not occur for naïve observers but did occur when observers were informed. This indicates that the McGurk illusion is due to a mechanism of audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect: naïve observers had little motivation to look at the face, whereas informed observers knew that the face was relevant to the task, which could have increased their motivation to look at it. Since Tuomainen et al. did not monitor eye movements in their experiments, the magnitude of this motivational effect is unknown. The purpose of our first experiment was to replicate Tuomainen et al.'s findings while controlling observers' eye movements using a secondary visual detection task. Observers presented with congruent and incongruent audiovisual sine-wave speech stimuli showed a McGurk effect only when informed of the speech nature of the stimulus. Performance on the secondary visual task was very good, supporting the finding of Tuomainen et al.
In a second experiment, we further investigated speech-specific audiovisual integration by testing whether the speech-specific audiovisual perceptual mode found in Experiment 1 would be advantageous in detecting a speech signal in noise. Thresholds for detecting sine-wave speech in noise were measured for naïve and informed participants. We found that the threshold for detecting speech was lower for audiovisual stimuli than for auditory-only stimuli, but there was no detection advantage for observers informed of the speech nature of the auditory signal. This may indicate that identification and detection of audiovisual speech draw on separate processes.

Reference: Tuomainen, J., Andersen, T., Tiippana, K., & Sams, M. (2005). Audio-visual speech perception is special. Cognition, 96(1), B13–B22.
Publication status: Published - 2009
Event: Brain and Mind Forum, Helsingør
Duration: 1 Jan 2009 → …