Automated Song Annotations

Derek Tingle and Doug Turnbull

Pandora's Music Genome Project involves paying a large number of music experts to annotate songs with a large vocabulary of tags such as "a dirty electric guitar solo", "southern rap", and "a prominent banjo part". Pandora uses these annotations to generate playlists of songs for individual users (e.g., personalized Internet radio) based on similarities between the tags that represent each song. The main drawback of Pandora's approach is that it is human-labor intensive: it takes a trained music expert about 20-30 minutes to annotate each song. Derek and Doug's goal was to develop a system that can automatically annotate a song in seconds using digital signal processing and machine learning.
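The playlist idea can be illustrated with a short sketch (not Pandora's actual system): if each song is represented as a vector of tag weights, songs can be ranked by the cosine similarity of their tag vectors to a seed song. The tag names and weights below are hypothetical.

    import math

    def cosine_similarity(a, b):
        """Cosine similarity between two equal-length tag-weight vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        if norm_a == 0 or norm_b == 0:
            return 0.0
        return dot / (norm_a * norm_b)

    # Hypothetical tag vocabulary and per-song tag weights in [0, 1].
    tags = ["dirty electric guitar solo", "southern rap", "prominent banjo part"]
    seed_song = [0.9, 0.0, 0.1]
    candidates = {
        "song_a": [0.8, 0.1, 0.0],
        "song_b": [0.0, 0.9, 0.0],
    }

    # Rank candidate songs by how similar their tags are to the seed song.
    ranked = sorted(candidates.items(),
                    key=lambda kv: cosine_similarity(seed_song, kv[1]),
                    reverse=True)
    print(ranked)  # song_a comes first: its tags best match the seed song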

First, a "training set" of songs were collected. The song annotations for the training set were downloaded from Pandora's website. For each of these songs, acoustic features were extracted using digital signal processing. A statistical "tag model" was trained by combining the acoustic features from each song that had been manually labeled with the given tag. The tag models are then used to find the probability that a new song is labeled with that tag.

The probabilistic tag annotations are combined with other sources of music information (e.g., text mining of websites, collaborative filtering of user preference information) to power Swarthmore's "Meerkat Music Discovery Engine". Meerkat was originally developed as a class project for the Spring 2009 course on Information Retrieval, and was further developed by Ashely Oudenne '11 this summer.