Abstract
Many different short-time features, derived from 10-30 ms of audio, have been proposed for music segmentation, retrieval and genre classification. Often, however, the time frame of music available for making a decision (the decision time horizon) is on the order of seconds rather than milliseconds. The problem of deriving new features at this larger time scale from the short-time features (feature integration) has received little attention. This paper investigates different methods for feature integration (early information fusion) and for late information fusion (combining the classifier's probabilistic outputs or decisions, e.g. by majority voting) for music genre classification.
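To make the two strategies concrete, the following is a minimal sketch, assuming mean/variance stacking as the feature-integration rule and majority voting for late fusion; the feature dimensions, window lengths and classifier outputs are illustrative placeholders rather than the paper's exact setup (the keywords suggest autoregressive models as a richer integration method, which is not shown here).

```python
# Sketch: early fusion (feature integration) vs. late fusion (majority voting)
# over short-time features. All sizes and labels below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for short-time features, e.g. MFCCs from ~30 ms frames:
# shape (n_frames, n_features), covering a few seconds of audio.
short_time_features = rng.normal(size=(200, 13))

# --- Early fusion: integrate short-time features over the decision horizon ---
def integrate_mean_var(frames: np.ndarray) -> np.ndarray:
    """Stack per-dimension mean and variance into one long-time feature vector."""
    return np.concatenate([frames.mean(axis=0), frames.var(axis=0)])

song_level_feature = integrate_mean_var(short_time_features)  # shape (26,)

# --- Late fusion: combine per-frame classifier decisions by majority vote ---
def majority_vote(frame_decisions: np.ndarray) -> int:
    """Return the most frequent genre label among the per-frame decisions."""
    labels, counts = np.unique(frame_decisions, return_counts=True)
    return int(labels[np.argmax(counts)])

# Stand-in for per-frame genre predictions from any short-time classifier.
frame_decisions = rng.integers(0, 5, size=200)  # 5 hypothetical genres
song_level_decision = majority_vote(frame_decisions)
```

In the early-fusion case a single classifier is trained on the integrated (song-level) feature, whereas in the late-fusion case the classifier operates on short-time features and its outputs are combined afterwards.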
| Original language | English |
| --- | --- |
| Publication date | 2005 |
| Publication status | Published - 2005 |
| Event | 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing, Philadelphia, United States. Duration: 18 Mar 2005 → 23 Mar 2005. Conference number: 30. http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=9711 |
Conference
| Conference | 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing |
| --- | --- |
| Number | 30 |
| Country/Territory | United States |
| City | Philadelphia |
| Period | 18/03/2005 → 23/03/2005 |
| Internet address | http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=9711 |
Keywords
- Information Fusion
- Music genre
- Autoregressive Model