Spotify’s music-recommendation system already relies on machine learning that happens behind the scenes. But an academic researcher who’s been spending time at the company’s New York office has been exploring ways to use deep learning to surface songs based on their audio content.
If Spotify puts researcher Sander Dieleman’s work into production, the company could have a shot at improving user experience enough to help it stand out against competitors like Rdio, Songza, and Beats. And rather than merely playing music that people might already be familiar with, Spotify could become more of a tool for people to discover new or unpopular music.
“We want to be able to recommend new music right when it is released, and we want to tell listeners about awesome bands they have never heard of,” Dieleman wrote yesterday in an extensive blog post on his research. “To achieve these goals, we will need to use a different approach.”
Deep learning is a specialized sort of machine learning, one that involves training systems called artificial neural networks on lots of information derived from audio, images, and other inputs, and then presenting the systems with new information and receiving inferences about it in response.
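The train-then-infer pattern described above can be illustrated with a toy example. This is a deliberately tiny, single-layer stand-in, not a deep network and not anything from Dieleman's work: it fits weights to labeled points, then classifies a point it has never seen.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: two clusters of 2-D points with known labels.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Training: repeatedly nudge the weights to reduce error on the known data.
w, b = np.zeros(2), 0.0
for _ in range(200):
    pred = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid activation
    grad = pred - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

# Inference: present a point the model never saw and read off its answer.
new_point = np.array([2.5, 1.5])
print(1 / (1 + np.exp(-(new_point @ w + b))) > 0.5)  # True: cluster 1
```

Real deep-learning systems stack many such layers and train on far more data, but the two-phase shape, fit on known examples and then query with new ones, is the same.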
In the past few years tech giants like Google, Microsoft, and Baidu have devoted resources to it. More recently, web companies like Facebook, Netflix, and Twitter have waded into the area with acquisitions, major hires, and early research. Now we can add Spotify to the list of companies interested in improving their services with deep learning.
At Spotify, Dieleman is building on convolutional neural networks, a deep-learning technique that Facebook’s Yann LeCun popularized, particularly in association with images. Dieleman trained his neural networks on short slices of songs and then started creating playlists based on certain features the networks observed in the audio.
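In broad strokes, a convolutional network for audio slides small filters over a time-frequency representation of a song slice. Here is a minimal numpy sketch of one such layer; the spectrogram dimensions, filter shapes, and pooling choice are illustrative assumptions, not details of Dieleman's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input: a short song slice as a mel-spectrogram with
# 128 frequency bins and 130 time frames (illustrative sizes).
n_mels, n_frames = 128, 130
spectrogram = rng.standard_normal((n_mels, n_frames))

# Filters slide along the time axis only, each spanning all frequency
# bins at once -- a common choice for audio convolutions.
n_filters, filter_width = 32, 4
filters = rng.standard_normal((n_filters, n_mels, filter_width)) * 0.01

def conv1d_time(spec, filters):
    """Valid 1-D convolution along the time axis, one output row per filter."""
    n_filters, n_mels, width = filters.shape
    out_len = spec.shape[1] - width + 1
    out = np.empty((n_filters, out_len))
    for t in range(out_len):
        patch = spec[:, t:t + width]              # (n_mels, width) window
        out[:, t] = np.tensordot(filters, patch)  # dot over both axes
    return np.maximum(out, 0.0)                   # ReLU nonlinearity

feature_maps = conv1d_time(spectrogram, filters)

# Pooling over time collapses the maps into one feature vector per slice.
song_features = feature_maps.mean(axis=1)
print(song_features.shape)  # (32,)
```

After training, each entry of such a vector comes to respond to some audible trait, which is what makes feature-based playlists possible.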
Playlists pick up on bass drums, vibrato singing, and ambiance, among other traits. But Dieleman went further, even gathering songs with similar pitch, guitar distortion, and other qualities.
Dieleman thinks the neural networks might be starting to pick up on certain characteristics in music like the appearance of chord progressions in certain genres. If that is actually happening, it’d be pretty fascinating. Just imagine how fun it’d be to listen to a playlist of 10 songs with chord progressions kind of like the one in, say, “Baba O’Riley” by The Who.
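A playlist like that would amount to a nearest-neighbor search over learned feature vectors. The following sketch assumes each song in a catalog already has such a vector (the catalog, vector size, and similarity measure here are illustrative, not Spotify's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical learned feature vectors for a small catalog of songs;
# in practice these would come from a trained network.
catalog = {f"song_{i}": rng.standard_normal(32) for i in range(100)}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def playlist_for(seed, catalog, k=10):
    """Return the k songs whose feature vectors are closest to the seed's."""
    seed_vec = catalog[seed]
    scored = [(name, cosine(seed_vec, vec))
              for name, vec in catalog.items() if name != seed]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in scored[:k]]

print(playlist_for("song_0", catalog))
```

If the network really does encode something like chord progressions in those vectors, the ten nearest neighbors of "Baba O'Riley" would share that quality.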
And the system might even be picking up on the language of song lyrics, Dieleman wrote.
The neural networks have their issues, he acknowledged, although they’re not bad considering they take only the music itself into account, and short samples of songs at that.
Now it’s up to Spotify to decide whether to put the deep learning-based recommendation system into production, even if only to weed out bad recommendations.
“Spotify already uses a bunch of different information sources and algorithms in their recommendation pipeline, so the most obvious application of my work is simply to include it as an extra signal,” Dieleman wrote.
Certainly that could prove useful for Dieleman, who himself is a follower of progressive metal.
“One of my main goals with this work is to make it possible to recommend new and unpopular music. I hope that this will help lesser-known and up-and-coming bands, and that it will level the playing field somewhat by enabling Spotify to recommend their music to the right audience,” he wrote.