Abstract
A fundamental and general representation of audio and music that integrates multi-modal data sources is important for both applied and basic research purposes. In this paper we address this challenge by proposing a multi-modal version of the Latent Dirichlet Allocation model, which provides a joint latent representation. We evaluate this representation on the Million Song Dataset by integrating three fundamentally different modalities: tags, lyrics, and audio features. We show how the resulting representation aligns with common 'cognitive' variables such as tags, and provide evidence for the common assumption that genres form an acceptable categorization when evaluating latent representations of music. We furthermore quantify the model by its predictive performance on genre and style, providing benchmark results for the Million Song Dataset.
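The joint representation described above can be approximated in practice with off-the-shelf tools. The sketch below is a simplified illustration, not the paper's model or inference procedure: it quantizes frame-level audio features into "audio words", pools tag, lyric, and audio-word counts into a single vocabulary, and fits standard LDA with scikit-learn so that each track's topic proportions act as a joint latent representation. All data, vocabulary sizes, and parameter values are placeholders.

```python
# Minimal sketch (assumed simplification, not the paper's multi-modal LDA):
# pool per-modality bag-of-words counts and fit standard LDA.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.cluster import MiniBatchKMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
n_tracks = 100

# Placeholder modality data (stand-ins for MSD tags, lyrics, and audio features).
tag_counts = csr_matrix(rng.poisson(0.1, size=(n_tracks, 300)))    # track x tag vocabulary
lyric_counts = csr_matrix(rng.poisson(0.3, size=(n_tracks, 1000))) # track x lyric vocabulary
timbre_frames = [rng.normal(size=(rng.integers(50, 200), 12)) for _ in range(n_tracks)]

# Quantize frame-level audio features into "audio words" with k-means,
# then count codeword occurrences per track.
codebook = MiniBatchKMeans(n_clusters=64, random_state=0).fit(np.vstack(timbre_frames))
audio_counts = np.zeros((n_tracks, 64))
for i, frames in enumerate(timbre_frames):
    codes = codebook.predict(frames)
    audio_counts[i] = np.bincount(codes, minlength=64)

# Pool the three modalities into one vocabulary and fit LDA; the per-track
# topic proportions serve as the joint latent representation.
X = hstack([tag_counts, lyric_counts, csr_matrix(audio_counts)]).tocsr()
lda = LatentDirichletAllocation(n_components=20, random_state=0)
theta = lda.fit_transform(X)  # (n_tracks, 20) topic proportions per track
print(theta.shape, theta[0].round(3))
```

The resulting topic proportions could then be fed to a downstream classifier to assess predictive performance on, e.g., genre labels, in the spirit of the benchmark described in the abstract.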
Original language | English |
---|---|
Journal | IEEE International Conference on Acoustics, Speech and Signal Processing. Proceedings |
Pages (from-to) | 3168-3172 |
ISSN | 1520-6149 |
DOIs | |
Publication status | Published - 2013 |
Event | 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver Convention and Exhibition Centre, Vancouver, Canada. Duration: 26 May 2013 → 31 May 2013. Conference number: 38. http://www.icassp2013.com/ |
Conference
Conference | 2013 IEEE International Conference on Acoustics, Speech and Signal Processing |
---|---|
Number | 38 |
Location | Vancouver Convention and Exhibition Centre |
Country/Territory | Canada |
City | Vancouver |
Period | 26/05/2013 → 31/05/2013 |
Internet address | http://www.icassp2013.com/
Bibliographical note
This work was supported in part by the Danish Council for Strategic Research of the Danish Agency for Science, Technology and Innovation under the CoSound project, case number 11-115328.
Keywords
- Signal Processing and Analysis