
What do Elvis, Bing Crosby, and William Shatner have in common? They’ve all released Christmas albums, of course. And now, thanks to the enterprising folks at Made by AI, a startup that uses machine learning algorithms to create jewelry designs, AI has joined the ranks of artists who’ve tapped the holiday spirit.

This month, as part of a “Christmas hack,” Made by AI leveraged a neural network — layers of mathematical functions that loosely mimic the behavior of neurons in the brain — to create a tune-generating tool it’s made freely available. Enter a duration in seconds, choose one of three instruments (glockenspiel, bells, or clarinet), and it’s off to the races — taking roughly 40 seconds of processing per minute of requested audio (up to a maximum of two minutes), the AI does its best to approximate a Christmas-y melody. You’ll receive a link via email when it finishes producing a new sample.

Judging by its SoundCloud album, the neural net won’t be topping the Billboard charts anytime soon. Still, there’s detectable structure in its work — the product of lots of training, the dev team explained, and a marginally sophisticated system under the hood.

When they set about building the Christmas song generator, the team first had to select an algorithm capable of generating long, decently coherent sequences without too much compute overhead. They eventually settled on a long short-term memory (LSTM) network, a type of recurrent neural network capable of learning long-term dependencies.
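As a rough illustration of why an LSTM suits long sequences — this sketch is not the Made by AI team's code — the single-unit cell below shows the gating mechanism: a forget gate, input gate, and output gate decide what the cell state keeps, adds, and exposes at each step, which is what lets the network carry information across a long run of notes. The toy scalar weights are arbitrary, chosen only so the recurrence runs.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a single-unit LSTM.

    `w` maps each gate name to (input weight, recurrent weight, bias).
    The cell state `c` is the long-term memory; the gates control
    how much of it survives from step to step.
    """
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])   # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])   # input gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2]) # candidate value
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])   # output gate
    c = f * c_prev + i * g      # keep some old memory, mix in some new
    h = o * math.tanh(c)        # expose a gated view of the memory
    return h, c

# Arbitrary toy weights, identical across gates, just to run the recurrence.
w = {k: (0.5, 0.5, 0.0) for k in ("f", "i", "g", "o")}
h, c = 0.0, 0.0
for x in (1.0, 0.0, 0.0, 1.0):  # a stand-in "note" sequence
    h, c = lstm_step(x, h, c, w)
```

In a real model these scalars become weight matrices over hundreds of units, and a framework such as Keras handles the recurrence, but the gating logic is the same.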



To train it, they sourced a dataset of a hundred Christmas songs in MIDI format — files encoding each note’s pitch, duration, and loudness — and used Music21, an open source Python library, to read and write them. Over time, as the LSTM ingested the MIDI files, it slowly “learned” to produce semblances of themes by replicating sequences of notes and chords.
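A minimal sketch of that preprocessing step — an assumption about the pipeline, not the team's actual code — is to map each note or chord symbol to an integer and slide a fixed-length window over the song, so the network repeatedly sees a short sequence and the note that follows it. The hard-coded note list here stands in for what Music21's parser (`music21.converter.parse`) would extract from a real MIDI file, so the sketch runs without the library.

```python
# Hypothetical parsed output: note names as Music21 might yield them.
notes = ["C4", "E4", "G4", "E4", "C4", "G4", "C4", "E4"]

# Map each distinct note/chord symbol to an integer the network can ingest.
vocab = sorted(set(notes))
note_to_int = {n: i for i, n in enumerate(vocab)}

# Slide a fixed-length window over the song: the model sees `seq_len`
# notes as input (X) and learns to predict the next note (y).
seq_len = 3
X, y = [], []
for i in range(len(notes) - seq_len):
    X.append([note_to_int[n] for n in notes[i : i + seq_len]])
    y.append(note_to_int[notes[i + seq_len]])
```

Generation then works in reverse: feed the trained model a seed window, sample its predicted next note, append it, and repeat until the requested duration is filled.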

According to the team, fully optimizing and training the model took about three hours on an Amazon Web Services GPU spot instance (with an Nvidia V100-SXM2). They toyed with the idea of building models that could generate song lyrics or be trained on raw audio, but ultimately decided to leave that to future work.

“Overall, we are satisfied with the results,” the team wrote. “[We] encourage others to try to generate Christmas music with other input data and other models.”

The tool’s debut comes a little less than a week after contributors to Project Magenta, a Google Brain project “exploring the role of machine learning as a tool in the creative process,” presented their work on Music Transformer, a machine learning model capable of generating relatively coherent songs with recognizable repetition.

Google is far from the only company using AI to generate head-banging jams, though — the long and growing list includes IBM, Jukedeck, Melodrive, Amper Music, and even Spotify.
