Modeling text with generalizable Gaussian mixtures

Lars Kai Hansen, Sigurdur Sigurdsson, Thomas Kolenda, Finn Årup Nielsen, Ulrik Kjems, Jan Larsen

    Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

    738 Downloads (Pure)


    We apply and discuss generalizable Gaussian mixture (GGM) models for text mining. The model automatically adapts model complexity for a given text representation. We show that the generalizability of these models depends on the dimensionality of the representation and the sample size. We discuss the relation between supervised and unsupervised learning in the test data. Finally, we implement a novelty detector based on the density model.
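    The two ideas in the abstract — choosing mixture complexity by generalization and detecting novelty from the fitted density — can be sketched as follows. This is a minimal illustration, not the authors' GGM implementation: it substitutes scikit-learn's `GaussianMixture` for the paper's model, uses synthetic 2-D data as a stand-in for a text representation, and selects the component count by held-out log-likelihood.

    ```python
    # Hedged sketch: (1) pick mixture complexity by held-out
    # generalization, (2) flag novel points by thresholding the
    # fitted log-density. scikit-learn stands in for the paper's GGM.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Toy stand-in for a low-dimensional text representation
    # (e.g. documents projected onto a few latent dimensions).
    train = np.vstack([rng.normal(0, 1, (200, 2)),
                       rng.normal(5, 1, (200, 2))])
    valid = np.vstack([rng.normal(0, 1, (50, 2)),
                       rng.normal(5, 1, (50, 2))])

    # (1) Model-order selection: keep the component count that
    # maximizes the mean log-likelihood on held-out data.
    best_k, best_ll, best_gmm = None, -np.inf, None
    for k in range(1, 6):
        gmm = GaussianMixture(n_components=k, random_state=0).fit(train)
        ll = gmm.score(valid)  # mean held-out log-likelihood
        if ll > best_ll:
            best_k, best_ll, best_gmm = k, ll, gmm

    # (2) Novelty detection: a point whose log-density falls below
    # a low quantile of the training scores is declared novel.
    threshold = np.quantile(best_gmm.score_samples(train), 0.01)
    novel_point = np.array([[20.0, 20.0]])  # far from both clusters
    is_novel = best_gmm.score_samples(novel_point)[0] < threshold
    print(best_k, is_novel)
    ```

    The design choice mirrors the abstract: complexity is not fixed in advance but chosen by generalization on held-out data, and the same density estimate then serves double duty as a novelty detector.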
    Original language: English
    Title of host publication: IEEE Proceedings of Acoustics, Speech, and Signal Processing
    Place of publication: Istanbul, Turkey
    Publication date: 2000
    ISBN (Print): 0-7803-6293-4
    Publication status: Published - 2000
    Event: 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing - Istanbul, Turkey
    Duration: 5 Jun 2000 – 9 Jun 2000
    Conference number: 25


    Conference: 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing
    Country/Territory: Turkey

    Bibliographical note

    Copyright: 2000 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes, for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.


