Artificial intelligence (AI) is becoming reasonably proficient at generating novel music, at least if semi-coherent piano melodies and holiday jingles are your thing. It’s not half bad at penning verses to accompany it, either. Kind of. Sort of. Not really.
Folks writing for Packt Publishing — via Hacker Noon — recently released a step-by-step guide showing how a neural network — in essence, layers of mathematical functions that loosely mimic the behavior of neurons in the brain — can be used to generate new, original lyrics in the style of any artist. Ostensibly.
Their algorithm of choice is a long short-term memory (LSTM) network, a type of recurrent neural network capable of learning long-term dependencies. The larger the training dataset, the better the results, generally speaking; for their demonstration, the authors sourced a text file of lyrics from 10,000 songs.
You can’t feed raw rhymes into the AI system; a bit of preprocessing is required. As the tutorial’s authors explain, the lyrics data is used to build a vocabulary mapping that assigns each word an integer index, which is then transformed by one-hot encoding — a process by which categorical variables (in this case, words) are converted into binary vectors with a single 1 marking each word’s position.
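The preprocessing step can be sketched in a few lines of Python. This is a toy illustration, not the tutorial’s actual code — the sample lyrics and helper names are invented for demonstration:

```python
# Toy sketch of lyric preprocessing: build a word-to-index vocabulary
# mapping, then one-hot encode each word as a binary vector.
lyrics = ["we are the champions", "we will rock you"]

# collect the unique words across all lines into a sorted vocabulary
words = sorted({w for line in lyrics for w in line.split()})
word_to_index = {w: i for i, w in enumerate(words)}

def one_hot(word):
    # a vector of zeros with a single 1 at the word's vocabulary index
    vec = [0] * len(word_to_index)
    vec[word_to_index[word]] = 1
    return vec
```

Each lyric line then becomes a sequence of such vectors, which is the numeric form a neural network can actually consume.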
After crafting a machine learning model in Keras — an open source neural network library written in Python that runs on Google’s TensorFlow machine learning framework — and storing the weights and bias values that adjust the strength of the network’s synaptic connections over time, the folks at Packt Publishing fed it song lyrics and kicked off training. Once the model reached the desired accuracy, they tasked it with brainstorming new rhymes.
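A word-level LSTM model of this kind might look something like the following in Keras. This is a minimal sketch under assumed hyperparameters (`vocab_size`, `seq_len`, and the layer widths are illustrative), not the Packt authors’ exact architecture:

```python
# Minimal sketch of a word-level LSTM lyric model in Keras.
# vocab_size and seq_len are illustrative placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 50   # size of the lyric vocabulary mapping
seq_len = 8       # number of context words fed to the network

model = keras.Sequential([
    # map each integer word index to a dense vector
    layers.Embedding(vocab_size, 32),
    # the LSTM layer learns long-range dependencies across the sequence
    layers.LSTM(64),
    # a softmax over the vocabulary scores each candidate next word
    layers.Dense(vocab_size, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")

# Training would then be a call like model.fit(X, y, epochs=...), where X
# holds integer-encoded word sequences and y the index of the word that
# follows each sequence; generation repeatedly samples from the softmax.
```

The softmax output is what makes generation possible: sampling from it (rather than always taking the most likely word) is what produces the varied, occasionally surreal lines quoted below.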
The results were … well, not entirely pleasant:
Yeah, oho once upon a time, on ir intasd
I got monk that wear your good
So heard me down in my clipp
Cure me out brick
Coway got baby,
I wanna sheart in faic
I could sink awlrook and heart your all feeling in the firing of to the still hild, gavelly mind, have before you, their lead
Oh, oh shor,s sheld be you und make
Oh, fseh where sufl gone for the runtome
Weaaabe the ligavus I feed themust of hear
Others have had better luck. This Medium writer tapped an open source LSTM network — textgenrnn — to generate Taylor Swift lyrics. Here’s some of the AI’s handiwork:
i ‘ m not your friends
and it rains when you ‘ re not speaking
but you think tim mcgraw
and i ‘ m pacing down
i ‘ m comfortable
i ‘ m not a storm in mind
you ‘ re not speaking
and i ‘ m not a saint
and i ‘ m standin ‘ t know you ‘ re
i ‘ m wonderstruck
and you ‘ re gay
And earlier this year, intrepid developers used a recurrent neural net conditioned on a range of artists, including ABBA, in an attempt to produce something somewhat usable. Their results were better than most, but they concede that their AI model might have memorized some of the lines in the training dataset:
Oh, my love it makes me close a thing
You’ve been heard,
I must have waited I hear you
So I say
Thank you for the music, that makes me cry
There’s no question that AI is getting better at parsing natural language. But it’s safe to say songwriters can rest easy.