Speech is perceived with vision as well as audition. In face-to-face situations, we seamlessly combine the visual information from the talker's face with the speech sound through a process of multisensory integration. This process helps us understand speech in noisy environments, but it can also change our perception of the sound. A striking example is the McGurk illusion, in which the auditory syllable "ba", dubbed onto an incongruent visual speech video containing a "ga", produces the perception of hearing "da": a percept that matches neither the auditory nor the visual stimulus. The McGurk illusion illustrates that multisensory integration profoundly influences our perception of speech in face-to-face situations.

In this thesis, I present four studies with the common aim of characterising the computational processes in the brain that support audiovisual integration in speech perception. In the first two studies, I explored electrophysiological signatures of congruent and incongruent audiovisual speech using electroencephalographic (EEG) recordings. First, I compared the brain activity evoked by the McGurk illusion to that of another audiovisual speech illusion, investigating whether they are supported by different types of audiovisual processing. Second, I investigated how audiovisual integration affects oscillatory activity in the brain by comparing how theta-band oscillations are affected by the perception of ambiguous audiovisual stimuli. In the third study, I developed a computational model of audiovisual speech perception, drawing on Bayesian theories of multisensory integration. I designed a comprehensive behavioural experiment to test whether the model could predict human responses to congruent and incongruent audiovisual speech in a wide range of conditions. Finally, in the fourth study, I investigated the neural correlates of my computational model of audiovisual speech perception. Using a combined behavioural and EEG experiment, I investigated whether prominent electrophysiological signatures of audiovisual integration correlated with the parameters of the model.
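The abstract does not spell out the model's equations, but Bayesian theories of multisensory integration are commonly illustrated by reliability-weighted cue fusion: under independent Gaussian likelihoods and a flat prior, the optimal combined estimate weights each cue by its inverse variance. A minimal sketch of that standard textbook computation (not the thesis's actual model; the function name and values are illustrative):

```python
def fuse_cues(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted fusion of an auditory and a visual cue.

    Assumes independent Gaussian likelihoods with means mu_a, mu_v and
    variances var_a, var_v. The fused estimate weights each cue by its
    reliability (inverse variance), and the fused variance is smaller
    than either single-cue variance.
    """
    reliability_a = 1.0 / var_a
    reliability_v = 1.0 / var_v
    w_a = reliability_a / (reliability_a + reliability_v)
    mu_fused = w_a * mu_a + (1.0 - w_a) * mu_v
    var_fused = 1.0 / (reliability_a + reliability_v)
    return mu_fused, var_fused

# A noisy auditory cue (variance 4) and a sharper visual cue (variance 1):
mu, var = fuse_cues(0.0, 4.0, 1.0, 1.0)
# The fused estimate (0.8) lies closer to the more reliable visual cue,
# and the fused variance (0.8) is below both single-cue variances.
```

Incongruent stimuli such as McGurk pairs are interesting precisely because naive fusion of conflicting cues pulls the percept away from both inputs, which is why fuller Bayesian accounts also model whether the cues share a common cause.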
Technical University of Denmark
Published - 2020
Quantifying Audiovisual Integration in Speech Perception
Project period: 01/09/2016 → 30/09/2020