Audio–visual speech perception is special

Jyrki Tuomainen, Tobias Andersen, Kaisa Tiippana, Mikko Sams

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

In face-to-face conversation, speech is perceived by ear and eye. We studied the prerequisites of audio–visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and visual stimuli. When the same subjects learned to perceive the same auditory stimuli as speech, they integrated the auditory and visual stimuli in a similar manner as they did natural speech. These results demonstrate the existence of a multisensory speech-specific mode of perception.
Keywords: Audio–visual speech perception, Multisensory integration, Selective attention, Sine wave speech
Original language: English
Journal: Cognition
Volume: 96
Issue number: 1
Pages (from-to): B13–B22
ISSN: 0010-0277
DOI
Publication status: Published - 2005
Externally published: Yes

