Ordinal models of audiovisual speech perception

    Research output: Article in proceedings (Chapter in Book/Report/Conference proceeding) › Research › peer-review



    Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion, in which watching the articulating face alters the auditory phonetic percept. Fully understanding this phenomenon requires a computational model with predictive power. Here, we describe ordinal models that can account for the McGurk illusion. We compare this type of model to the Fuzzy Logical Model of Perception (FLMP), in which the response categories are not ordered. While the FLMP generally fit the data better than the ordinal models, it also employs more free parameters in complex experiments where the number of response categories is high, as it is for speech perception in general. Testing the predictive power of the models using a form of cross-validation, we found that the ordinal models perform better than the FLMP. Based on these findings, we suggest that ordinal models generally have greater predictive power because they are constrained by a priori information about the adjacency of phonetic categories.
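    The two model classes compared above can be sketched as response-probability rules. In the FLMP, per-category auditory and visual supports are multiplied and renormalised over categories; in an ordinal model of the cumulative-probit type, a latent decision variable is partitioned by ordered thresholds so that adjacent intervals map onto adjacent phonetic categories. A minimal sketch, assuming a cumulative-probit formulation (the parameter values, function names, and category labels below are illustrative, not taken from the paper):

    ```python
    import math

    def flmp_probs(a, v):
        # FLMP combination rule: per-category auditory support a[k] is
        # multiplied by visual support v[k], then renormalised so the
        # response probabilities sum to one.
        s = [ai * vi for ai, vi in zip(a, v)]
        z = sum(s)
        return [si / z for si in s]

    def norm_cdf(x):
        # Standard normal CDF via the error function.
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def ordinal_probs(mu, cuts):
        # Cumulative-probit ordinal model: a decision variable with mean
        # mu (unit variance) falls between ordered thresholds `cuts`;
        # adjacent intervals map onto adjacent (ordered) categories.
        cdf = [norm_cdf(c - mu) for c in cuts]
        edges = [0.0] + cdf + [1.0]
        return [edges[i + 1] - edges[i] for i in range(len(edges) - 1)]

    # Hypothetical example: three ordered categories, e.g. /ba/ < /da/ < /ga/.
    p_flmp = flmp_probs([0.9, 0.3, 0.1], [0.2, 0.8, 0.4])
    p_ord = ordinal_probs(mu=0.5, cuts=[-0.5, 1.0])
    ```

    Note that the ordinal model spends its parameters on a single decision axis plus ordered cut points, whereas the FLMP needs a support parameter per category and modality, which is one way its parameter count can grow with the number of response categories.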
    Original language: English
    Title of host publication: Proceedings of the International Symposium on Auditory and Audiological Research 2011
    Publication date: 2011
    Publication status: Published - 2011
    Event: 3rd International Symposium on Auditory and Audiological Research, Hotel Nyborg Strand, Nyborg, Denmark
    Duration: 24 Aug 2011 – 26 Aug 2011


    Conference: 3rd International Symposium on Auditory and Audiological Research
    Location: Hotel Nyborg Strand


