Abstract
We introduce a two-alternative forced-choice experimental paradigm to quantify expressed emotions in music along the two well-known arousal and valence (AV) dimensions. To produce AV scores from the pairwise comparisons and to visualize the locations of excerpts in the AV space, we propose a flexible Gaussian process (GP) framework that learns directly from the pairwise comparisons. A novel dataset is used to evaluate the framework, and learning curves show that it needs relatively few comparisons to achieve satisfactory performance. This is further supported by visualizing the learned locations of excerpts in the AV space. Finally, by examining the predictive performance of user-specific models, we show the importance of modeling subjects individually due to significant subjective differences.
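The core idea of the abstract, learning latent AV scores from two-alternative forced-choice comparisons with a GP prior, can be sketched as follows. This is a minimal illustration under our own simplifying assumptions (probit pairwise likelihood, squared-exponential kernel, MAP fit on synthetic 1-D features), not the authors' actual implementation; all names and hyperparameters here are hypothetical.

```python
# Sketch: GP preference learning from pairwise comparisons (MAP estimate).
# Assumed model: latent valence f with GP prior, probit pairwise likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy "excerpts": 1-D audio features; a hidden valence function to recover.
X = np.linspace(0, 1, 8)[:, None]      # 8 items, 1 feature each
f_true = np.sin(2 * np.pi * X[:, 0])   # hidden latent scores

# Simulate forced-choice data as (winner, loser) index pairs.
pairs = []
for _ in range(60):
    i, j = rng.choice(len(X), size=2, replace=False)
    if f_true[i] + 0.1 * rng.standard_normal() > f_true[j]:
        pairs.append((i, j))
    else:
        pairs.append((j, i))
pairs = np.array(pairs)

# Squared-exponential GP prior over the latent scores.
def rbf(A, B, ell=0.3):
    d = A[:, None, 0] - B[None, :, 0]
    return np.exp(-0.5 * (d / ell) ** 2)

K = rbf(X, X) + 1e-6 * np.eye(len(X))
K_inv = np.linalg.inv(K)

# Negative log posterior: probit likelihood of each comparison + GP prior.
def neg_log_posterior(f, noise=0.1):
    diff = (f[pairs[:, 0]] - f[pairs[:, 1]]) / (np.sqrt(2) * noise)
    log_lik = norm.logcdf(diff).sum()
    log_prior = -0.5 * f @ K_inv @ f
    return -(log_lik + log_prior)

res = minimize(neg_log_posterior, np.zeros(len(X)), method="L-BFGS-B")
f_map = res.x

# The MAP scores should correlate strongly with the hidden function,
# even though only relative judgments were observed.
print(np.corrcoef(f_true, f_map)[0, 1])
```

Note that the scale of `f_map` is arbitrary (only comparisons are observed), so only the relative ordering of excerpts is identified; this is why the paper reports locations in the AV space rather than absolute ratings.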
Original language | English |
---|---|
Title of host publication | 9th International Symposium on Computer Music Modelling and Retrieval (CMMR 2012) |
Publisher | Queen Mary University of London |
Publication date | 2012 |
Pages | 526-533 |
Publication status | Published - 2012 |
Event | 9th International Symposium on Computer Music Modelling and Retrieval (CMMR 2012) |

Conference

Conference | 9th International Symposium on Computer Music Modelling and Retrieval (CMMR 2012) |
---|---|
Location | Queen Mary University of London |
Country/Territory | United Kingdom |
City | London |
Period | 19/06/2012 → 22/06/2012 |
Internet address | http://www.cmmr2012.eecs.qmul.ac.uk/ |