Do you get down to the Jackson 5, or is Stravinsky more your style? Artificial intelligence (AI) that predicts taste in music might seem stranger than fiction, but researchers at Jönköping University in Sweden and Maastricht University in the Netherlands believe they've cracked the code.
In a paper published on the preprint server arXiv.org, the team described a system that considers a person's listening behaviors and, using machine learning algorithms and psychological models, infers their "musical sophistication."
“Psychological models are increasingly being used to explain … behavioral traces,” they wrote. “The use of domain dependent psychological models allows for more fine-grained identification of behaviors [like music listening] and provides a deeper understanding behind the occurrence of those behaviors.”
In this context, musical sophistication refers to “musical skills, expertise, achievements, and related behaviors across a range of facets.” Studies have shown, the researchers pointed out, that people with a higher degree of musical sophistication are more musically skilled, and in general tend to engage in more “musical behaviors,” like practicing an instrument or listening to a variety of musical genres.
They collected data through an app that tapped Spotify's API, allowing them to retrieve users' playlists and audio features like liveness, energy, danceability, tempo, time signature, loudness, track popularity, and artist popularity. They also had participants answer questions from the Goldsmiths Musical Sophistication Index (Gold-MSI) — specifically, questions related to active engagement (how much time and money a person spends on music) and emotions (behaviors related to emotional responses to music).
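Spotify's Web API exposes these audio features as per-track JSON objects, so one common way to describe a listener is to average the features across their tracks. A minimal sketch of that aggregation step, assuming track dicts shaped like the API's audio-features response (the feature values below are invented, and this is not the authors' exact pipeline):

```python
# Sketch: collapse per-track Spotify-style audio features into a
# single per-user profile vector by averaging each feature.
# The track dicts mimic the shape of the Web API's audio-features
# response; the numbers are made up for illustration.
tracks = [
    {"liveness": 0.12, "energy": 0.81, "danceability": 0.70, "tempo": 124.0, "loudness": -5.2},
    {"liveness": 0.30, "energy": 0.55, "danceability": 0.62, "tempo": 98.0, "loudness": -8.1},
    {"liveness": 0.08, "energy": 0.90, "danceability": 0.75, "tempo": 140.0, "loudness": -4.4},
]

def user_profile(tracks):
    """Average each audio feature across a user's listened tracks."""
    keys = tracks[0].keys()
    return {k: sum(t[k] for t in tracks) / len(tracks) for k in keys}

profile = user_profile(tracks)
print({k: round(v, 3) for k, v in profile.items()})
```

A profile vector like this is the kind of fixed-length input a downstream model can consume, regardless of how many tracks a user has listened to.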
The deluge of data fed into a neural network — an AI system consisting of processing nodes that model neurons in the human brain — that predicted 61 study subjects’ emotions and active musical engagement with a high degree of accuracy. Compared to the baseline, it was 95 percent accurate at predicting the former and 93 percent accurate at predicting the latter.
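The paper does not spell out the network's architecture, but the task — audio-feature vectors in, a Gold-MSI subscale score out — can be sketched as a small feed-forward regressor. A minimal NumPy sketch trained on synthetic data (the layer sizes, the 61-sample shape, and the data itself are illustrative assumptions, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 61 "users", 8 audio-feature inputs,
# one Gold-MSI-style subscale score as the regression target.
# The target is an invented linear function of the features.
X = rng.normal(size=(61, 8))
y = X @ rng.normal(size=(8, 1)) * 0.5 + 0.1

# One hidden layer with tanh activation, trained by gradient descent.
W1 = rng.normal(scale=0.1, size=(8, 16)); b1 = np.zeros((1, 16))
W2 = rng.normal(scale=0.1, size=(16, 1)); b2 = np.zeros((1, 1))
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)   # hidden activations
    return h, h @ W2 + b2      # prediction

_, pred0 = forward(X)
initial_loss = float(np.mean((pred0 - y) ** 2))

for _ in range(500):
    h, pred = forward(X)
    grad_pred = 2 * (pred - y) / len(X)       # d(MSE)/d(pred)
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0, keepdims=True)
    grad_h = grad_pred @ W2.T * (1 - h ** 2)  # backprop through tanh
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0, keepdims=True)
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

_, pred = forward(X)
final_loss = float(np.mean((pred - y) ** 2))
print(f"MSE before training: {initial_loss:.4f}, after: {final_loss:.4f}")
```

The study's reported accuracies would come from evaluating such a model against held-out questionnaire scores; this sketch only shows the training loop shrinking the loss on its synthetic data.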
In the future, the team plans to conduct additional, larger studies and to explore predicting other subscales of the Gold-MSI, including singing abilities, perceptual abilities, and musical training.
“Our results show that music listening behavior can be used to infer the musical sophistication of users,” the researchers wrote.
It’s not the first time data scientists have attempted to predict musical preferences and tastes with machine learning.
At the Amsterdam Dance Event Tech conference in 2017, a team presented Hitwizard, a system trained to forecast popular songs. By taking into account features like beats per minute, valence, and tempo and comparing them against data sourced from Spotify charts and Dutch radio stations, it was able to predict hit tracks with 66 percent accuracy (and flops with 93 percent accuracy).
This year, engineers at Amazon leveraged AI to predict users’ musical tastes based on playback duration.