Improving Music Genre Classification by Short Time Feature Integration

Anders Meng, Peter Ahrendt, Jan Larsen

    Research output: Contribution to conference › Poster › Research


    Abstract

    Many different short-time features (derived from 10-30 ms of audio) have been proposed for music segmentation, retrieval and genre classification. Often, however, the time frame of music available for making a decision (the decision time horizon) is in the range of seconds rather than milliseconds. The problem of constructing new features at this larger time scale from the short-time features (feature integration) has received little attention. This paper investigates different methods for feature integration (early information fusion) and late information fusion (combining probabilistic outputs or decisions from the classifier, e.g. by majority voting) for music genre classification.
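    As a rough illustration of the two fusion strategies the abstract contrasts, the sketch below summarizes a window of short-time feature vectors into one long-time vector (early fusion) and, alternatively, combines per-frame classifier decisions by majority voting (late fusion). The mean-variance summary and all names here are illustrative assumptions, not the paper's exact models (the paper also considers e.g. autoregressive feature integration):

    ```python
    import numpy as np

    def integrate_features(short_time_feats):
        """Early fusion sketch: collapse a (frames x dims) array of
        short-time features into one vector for the whole decision
        horizon using per-dimension mean and variance."""
        mean = short_time_feats.mean(axis=0)
        var = short_time_feats.var(axis=0)
        return np.concatenate([mean, var])

    def majority_vote(frame_decisions):
        """Late fusion sketch: combine per-frame genre decisions
        over the decision horizon by majority voting."""
        values, counts = np.unique(frame_decisions, return_counts=True)
        return values[np.argmax(counts)]
    ```

    The early-fused vector would then be classified once per decision window, whereas late fusion classifies every short-time frame and only merges the resulting decisions.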
    Original language: English
    Publication date: 2005
    Publication status: Published - 2005
    Event: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2005) - Philadelphia, United States
    Duration: 18 Mar 2005 - 23 Mar 2005

    Conference

    Conference: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2005)
    Country: United States
    City: Philadelphia
    Period: 18/03/2005 - 23/03/2005

    Keywords

    • Information fusion
    • Music genre
    • Autoregressive model
