Free AI Music Generators Are Pioneering a New Era
The tsunami started in April 2023, when Drake seemingly released a hotly anticipated single featuring The Weeknd, "Heart on My Sleeve". It immediately received critical acclaim. Problem is… Drake had nothing to do with it; someone had used a free AI music generator to create a synthetic version of Drake's voice, mixed it with a synthetic The Weeknd voice, and made a song. It must have taken just a few hours, but it may forever change music.
What happened next was predictable. The song went viral, and Drake's label stepped in to get it removed. Too late. Pandora's box has been opened, and there is no closing it. Voice cloning tools are getting better fast. Very fast. Today, just about anyone can create a realistic version of speaking voices… and singing voices.
Some may argue that synthetic songs are just the natural evolution of mashups, which DJs have been creating for years, and that this is just a novelty that will not become ubiquitous. Some gimmicks like the cover below do indeed belong to the "mashup" category, but they're not the ones we're discussing here. We're talking about "fakes". About Software Eating the World. About the end of stardom's scarcity, and the labels' fight to stop this inevitability.
Scarcity is what gives artists value; there is only one Taylor Swift, one Beyoncé, one Stromae. If we multiply them, their worth diminishes. Sure, a few smart marketers may make millions by creating copycats before getting sued into the ground, but that will be through trickery. That’s why labels and artists are scared of AI music — it threatens their livelihood. In fact, it threatens their very raison d’être.
And so, the lawyers have jumped into action. There is no law saying one cannot train an AI on voices, so the big labels have gone straight to the source, asking streaming services to block AI companies from accessing their songs. Meanwhile, YouTube and the Universal Music Group (UMG) created an AI incubator to better understand and tackle "the issue".
They are no doubt working to create something similar to the Content ID software that automatically ensures royalties get paid when a song is played on YouTube. When a recognizable (famous?) voice is "heard", the software would take note and demonetize the video, as it does today with songs. But what happens if someone just happens to sound like Drake or Taylor Swift? Would YouTube give royalties to the Universal Music Group nonetheless, further alienating content creators? What about covers? Do they become illegal? Will our voices be tracked on the internet, much like our faces are on the street? A new era of music is beginning, and if we don't pay attention, it may well not be for the best.
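To make the stakes concrete: a Content-ID-style voice matcher would plausibly reduce uploaded audio to an embedding vector and compare it against a registry of label-owned "voiceprints". The sketch below is purely illustrative, not how Content ID actually works; the registry, vectors, threshold, and function names are all hypothetical assumptions. Note how the hard policy questions live in one number: set the similarity threshold low enough, and anyone who merely sounds like a star gets demonetized.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical registry of label-owned voiceprints (toy 3-d embeddings;
# a real system would use high-dimensional vectors from a speaker model).
VOICEPRINTS = {
    "famous_artist_a": [0.9, 0.1, 0.3],
    "famous_artist_b": [0.2, 0.8, 0.5],
}

def flag_if_recognized(upload_embedding, threshold=0.95):
    """Return the best-matching registered voice above threshold, else None.

    The threshold is the whole policy debate in one parameter: lower it,
    and soundalikes and covers start getting flagged too.
    """
    best_name, best_score = None, 0.0
    for name, voiceprint in VOICEPRINTS.items():
        score = cosine_similarity(upload_embedding, voiceprint)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

An upload whose embedding matches a registered print gets flagged; one that merely drifts in the same direction does not, until someone decides to lower the threshold.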
As always in next-stage capitalism, it won't be fair. While YouTube argues that artists' voices should be protected, Google (YouTube's owner) will continue using everyone's content to train its next big algorithm to power Search. And the creators of that content? They won't receive a dime.
In fact, no one will be given the same rights and protections as the big names. While they are protected by an army of lawyers and technologists, it's almost assured that algorithms will devour indie bands' content to learn from them and help labels pump out better, more commercially successful songs for their golden ponies. I'd put my money on Spotify, but Apple is also a good bet.
From one AI song, we now see the next battlefield for music. Some will have their voices trademarked, others will have theirs stolen. Freddie Mercury allegedly said “Do whatever you want with my music, just don’t make it boring”. Our only way out is if more people recognise that as the path forward, rather than the alternative. Don’t hold your breath; the alternative has money.
Good luck out there.