Artistry in the Algorithms: Music in the Age of AI

Berklee alumni and professors speak to Berklee Today about the promise and peril of making music with AI. 

November 13, 2023

A few months ago, Associate Professor Ben Camp, who teaches in Berklee’s Songwriting Department, was working with a recent graduate on a tricky lyric for a new album. She wanted to convey the sense that she could create magic, and to urge listeners to get in touch with their own inner powers instead of being afraid of them, recalls Camp (they/them). “And the context for this line was like the mood of the witches that open Macbeth.” Later, Camp sent her a potential lyric: “In the witching hour’s hold / Do you hear the stories told? / Do you fear the magic yet to unfold?”

“I read it to her and she loved it,” says Camp, noting that she changed only one word: “unfold” became “unfurl.” Camp then revealed that they hadn’t actually come up with the line; it was generated by ChatGPT, the artificial intelligence (AI) chatbot built on a large language model. “She was like, ‘Oh my God, I hate you,’” Camp says, chuckling.

The anecdote captures the ambivalence songwriters and musicians are feeling about artificial intelligence, says Camp, who has begun introducing ChatGPT and other AI tools into their classes. “I’ve gotten everything from ‘I need to go home and throw out my computer,’ to ‘It’s okay, but it doesn’t have the spirit and soul a human has,’ to ‘Thank you for making me use AI. I did a way better job than I would have without it.’”

Ben Camp

Associate Professor Ben Camp

Image by Kelly Davidson

As generative AI and machine learning get better and better at mimicking human creativity—and at making original music—the technologies have been greeted by musicians and music fans alike with mixed emotions, including a healthy dose of fear. This spring, when an anonymous composer released the song “Heart on My Sleeve,” claiming they used AI to generate passable versions of the voices of Drake and the Weeknd, the so-called “Fake Drake” alarmed musicians everywhere, who feared that their talents could simply be ripped off. 

It was the second time in a matter of months that AI had crossed the line into what had been exclusively the territory of musicians and songwriters. In February, DJ David Guetta created a song using AI to generate lyrics along with the voice of Eminem. Getting ahead of the trend, in April Grimes gamely told creators they could freely use her voice in compositions “without penalty.” (That’s easy enough when you’re a multimillion-dollar artist who has children with the richest man in the world.) So far, such experiments seem little more than musical pranks. But as the technology continues to improve, it’s quickly forcing musicians and songwriters, including Berklee faculty and alumni, to grapple with its implications.

“These are still early days. But there are a number of startups right now in an arms race to productize AI technology for music that we’ll start to see in six months to a year.”

— Jonathan Bailey ’08, former chief technology officer of iZotope

Democratizing or Dumbing Down?

“These are still early days,” says Jonathan Bailey ’08, former chief technology officer of iZotope, a company using machine learning in part to create software for recording, mixing, and mastering music. “But there are a number of startups right now in an arms race to productize AI technology for music that we’ll start to see in six months to a year.” One of iZotope’s big breakthroughs was using AI to remove background noise from an audio recording—a notoriously difficult task.

“Separating out the speech content from noise is really difficult, but it’s actually a pretty easy problem to solve using deep learning,” Bailey says. The same principles that can label sounds as speech or noise are now being applied to identify drums, guitars, and other instruments—and will eventually be able to generate those sounds, Bailey says. 

“If you’re a singer-songwriter, you can record your voice and guitar in your bedroom and then automatically enhance the quality to sound like you recorded it in a professional studio. Then what if you could add a bass track and drums to that? Right now, it seems in the realm of fantasy, but this could become more and more possible.”
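The separation Bailey describes boils down to estimating a mask over a time-frequency representation: label each frequency bin as signal or noise, and keep only the signal. In products like iZotope’s, that mask comes from a trained deep network; the toy sketch below is this writer’s own illustration, not iZotope’s code, and substitutes a crude magnitude threshold for the learned mask just to show the idea:

```python
import numpy as np

# One second of a pure 440 Hz tone, buried in broadband noise.
sr = 8000
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 440 * t)
rng = np.random.default_rng(0)
noisy = clean + 0.3 * rng.standard_normal(sr)

# Crude "spectral gate": keep only frequency bins with strong energy.
# A deep-learning denoiser would instead predict this mask per bin.
spec = np.fft.rfft(noisy)
mask = np.abs(spec) > 0.1 * np.abs(spec).max()
denoised = np.fft.irfft(spec * mask, n=sr)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

A fixed threshold only works when the signal is much louder than the noise in a few bins, which is exactly why speech—spread across many overlapping bins—needs a learned mask rather than this one.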

In a sense, such technologies are an extension of the democratization of music recording, continuing the trend of software such as GarageBand and Auto-Tune that has made advanced tools more accessible to amateur musicians. “It used to cost a million dollars to create an album, then $10,000, and now $1,000,” Bailey says. “To me, generative AI is just a point on that curve, the next evolution of enabling more people to be creative.”

That democratizing trend in AI doesn’t bother Mark Simos, a professor in Berklee’s Songwriting Department. “It’s part of a whole progression of technologies to help nonskilled musicians to be able to write songs,” he says. “That’s not likely to fundamentally change the nature of the industry.” On the other hand, he does see a danger in professional musicians using AI to generate songs out of whole cloth.

Mark Simos

Professor Mark Simos

Image by Louise Bichan

Decades ago, Simos worked in software with expert systems, a precursor to AI. More recently, he’s judged several AI songwriting contests and has noticed a disturbing trend of musicians using AI as a shortcut to creativity. “When you ask ChatGPT to spit out a song on a certain topic in a certain style, that’s not how real songwriting works,” Simos says. “Much of real songwriting is nonlinear; it’s not just reaching into some oracular space in your head and out pops the song.”

What AI can’t replicate—at least not yet—is the combination of music and lyrics that makes a song truly great. “You see a lot of people saying, ‘We used this technology to generate lyrics, and this technology to generate a melody, and then kind of slapped it together,’” Simos says. “But the slapping it together part is where the magic really happens.” 

In songwriting classes, he says, faculty place a lot of emphasis on something called prosody. “It’s the way that all of the different parts of the song are telling the same story and communicating the same emotional truth,” Simos says. “That hasn’t been the focus of any of the AI songwriting research I’ve seen, but that’s the kind of stuff real songwriters and the people who teach real songwriters wrestle with.”

Simos worries that the increasing prevalence of AI music may degrade the quality of music as a whole. “You throw enough of that into the industry, and eventually you start eroding people’s ability to tell the difference.” He’s working on a project now to adapt his 2014 book Songwriting Strategies: A 360-Degree Approach for writing with AI, in order to help songwriters more realistically understand what the technology can and can’t do, and to suggest ways that AI may help the songwriting process without dumbing it down.

That’s the approach that his colleague Ben Camp has taken as well: intentionally introducing ChatGPT and other AI technology into their courses so students can learn to use them as tools rather than seeing them as threats. “There is a lot of fear that AI is going to take our jobs—but at least in its current form, AI is not self-directed, so it’s not going to do anything other than what a human tells it to,” Camp says. “It’s really important to me to make sure that my students become the humans who have those skills to know how to talk to AI and get those jobs.”

Sounding Board for Songwriting

At this point, Camp finds AI most helpful in the brainstorming phase of musical composition. “I think creatives of all fields are familiar with the problem where you’ve got like 80 percent of it figured out but there’s this one thing that stumps you,” they say. Maybe it’s a lyric that needs to rhyme with one you already have, or one that possesses a certain number of syllables while expressing a complicated emotional sentiment. That’s when Camp will turn to a large language model such as ChatGPT, which generates text by predicting statistically plausible continuations of a prompt. “I’ll say, ‘Give me 20 options for this line,’” Camp says. “And if they all end up sounding the same, I’ll then say, ‘Give me 20 more options that have nothing to do with each other, to make them as diverse and strange and unexpected as possible.’”

Oftentimes, they say, that’s enough to get their creative juices flowing. “I’ll see something I never would have thought to come up with in a million years—and that’s not quite it, but it leads me to a solution.” As for the ethics of using a computer to help compose a song, Camp sees it as no different from the way musicians for centuries have looked to other songs for inspiration. “All of music is a history of chained thoughts and expressions leading up to where we are now,” Camp says. “The entire methodology of teaching music is to play a song, analyze it to see why it does the thing it does, and then be able to do that thing.”
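Camp’s two-step routine—ask for a batch of candidate lines, then re-ask for maximally diverse ones—is really just prompt construction. A minimal sketch follows; the function name and wording are this writer’s invention, not anything Camp actually uses:

```python
def lyric_prompt(context: str, n: int = 20, diversify: bool = False) -> str:
    """Build a brainstorming prompt for a chat model such as ChatGPT."""
    prompt = f"Give me {n} options for this lyric line: {context}"
    if diversify:
        # Camp's follow-up move: push the model away from samey outputs.
        prompt += (
            f" Make the {n} options have nothing to do with each other,"
            " as diverse and strange and unexpected as possible."
        )
    return prompt

first_pass = lyric_prompt("a line about inner magic that rhymes with 'unfurl'")
second_pass = lyric_prompt(
    "a line about inner magic that rhymes with 'unfurl'", diversify=True
)
```

Either string could be sent to any chat-completion API; the craft is in the second, diversifying pass, which deliberately counteracts the model’s tendency to produce twenty variations on the same idea.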

Nevertheless, there is a gray area when it comes to credit. “It’s hard to know when it’s the artist’s duty to disclose,” Camp says, adding that they will probably note ChatGPT’s contribution to the lyrics for the song composed with their student.

As much as they are excited about the potential of AI technology for songwriting assistance, Camp worries, as Simos does, about the potential of AI to generate whole songs, especially when this capability is paired with a streaming service such as Spotify, which has the ability to use machine learning to curate a feed of customized music for listeners.

“When the drum machine emerged in the ’80s, people said we won’t need drummers. But drummers were able to manipulate this new technology, and together with the sampler it arguably created hip-hop.”

— George Howard, professor of music business

“Algorithms are getting so good they create a very specific funnel that can become a bit of an echo chamber,” Camp says. Taken to extremes, that kind of individualized music generation could deprive us of the communal aspect that fosters connections among people who listen to music together. “Have you ever been to a concert where the entire crowd is singing songs back as one?” they say. “I shudder to think of a world in which everybody has their own songs, and the crowd couldn’t chant together the end of ‘Hey Jude’ for minutes on end.”

Holding the Model Accountable

That potential for personally generated music also worries George Howard, a professor in Berklee’s Music Business/Management Department. But his concern lies mostly with AI’s commercial implications. Cofounder of the media investment and advising company Acme Innovation, Howard has long worked to create technologies that give musicians and songwriters more control over their own music; these include blockchain technology that accurately and effectively tracks when and how songs are being used so that creators can be properly compensated.

For more than a decade, Howard has criticized music streaming services for the downward pressure they’ve put on payments to artists. “They’re making the end-user customer feel that music should be free,” he says. “I see it every semester with my students, how very few of them have any expectation of making any type of money from Spotify—and very few do.”

George Howard

Professor George Howard

Image by Kelly Davidson

Meanwhile, Spotify and other services have curated playlists to the extent that listeners barely even know what artist they are listening to. “[Streaming services] would rather have users searching for playlists of music to cook to, to relax to, to have sex to, or whatever,” Howard says. “As AI-generated music becomes—while not necessarily equal to human-made music—sufficient, in certain cases, we can’t be putting our heads in the sand.” 

The issue, he says, is not so much that AI-generated music can mimic established artists such as Drake or Eminem—it’s that the massive datasets of music AI uses to train its generative models are entirely composed of music that people created and own. Existing copyright law, says Howard, is clear that you are not allowed to create music based on someone else’s song. “That’s known as a derivative work, and the copyright code stipulates you can’t do that,” says Howard, who is trained as a copyright attorney. “You have to get permission from the original owner.” 

As AI scrapes the internet for songs to use in its training models, he says, it is “infringing like crazy.” The question is how to stop it. Since the training models are virtual black boxes, it’s very difficult for an individual artist to know when generative AI technology is using their work, and how. Howard predicts that eventually there will be massive class-action lawsuits trying to get compensation for artists and labels. “Some plaintiff’s attorney will find someone who can say, ‘Yeah, my music’s in it and I didn’t give them authorization,’ and they’ll build a lawsuit out of that.”

The problem, of course, is that such a lawsuit will take time to wend its way through the courts, and an outcome favorable to human creators and owners isn’t guaranteed. In Howard’s assessment, it would be more effective to pursue an entrepreneurial approach to the problem. He’s working on a company that would proactively license music that could then be used to train an AI model. Using blockchain technology, it would then be possible to tell exactly how much the model relied on particular songs to generate its music, enabling compensation for the original artists who created the works.

But so far, Howard has struggled to get people in the industry to focus on the issue. “I’m pretty much alone in the entrepreneurial world in building this, and I’m competing against people who don’t care about it at all,” he says. “I’m putting my own money and time into it, because it feels very desperate and dire to me.”

Whatever happens with the music industry, it’s clear that AI will inevitably alter it, and in ways that may not even be clear at present. Other industries have faced threats from technology that have transformed their trajectories. “Photography was supposed to kill painting, but it allowed painting to become something different, more abstract and impressionistic,” says Bailey. Camp notes that “[computer-generated imagery] scared animators, but it led to Pixar.”

Even in the music industry, technology has spawned new opportunities. “When the drum machine emerged in the ’80s, people said we won’t need drummers,” says Howard. “But drummers were able to manipulate this new technology, and together with the sampler it arguably created hip-hop. Technology is generally sort of agnostic—it’s really the intent that goes into it that determines how it will be used.” As AI continues to make its presence felt in the music industry, that intent is everything—determining how music will respond to this new technology, and what we all will be listening to in the years to come. 

By Bots, For Bots: A New Music Ecosystem

Jon Sabillón

Jon Sabillón M.A. ’18

If you go to the website for Jon Sabillón’s record label, Aigg, you probably won’t be able to listen to any of the music. That’s because the site is protected by a “reverse captcha” requiring visitors to prove that they are not human to enter. “My thesis at Berklee was to create the world’s first music label for robots,” says Sabillón M.A. ’18, who graduated from Berklee’s campus in Valencia, Spain, with a master’s degree in global entertainment and music business. “Machines make the music—and are also the audience.”

In part a thought experiment to explore the boundaries of artificially generated music, the label might be ahead of its time. Sabillón figures if robots are going to become more prevalent in the future, they should have a way to entertain themselves. “I think that there will be a degree of nonhuman culture that is perpetuated in the future, and I can’t imagine them not becoming conscious,” he says. “To that end, there should be something servicing these new entities.”

He admits that his venture is often greeted with fear by his human colleagues—especially his fellow musicians—but he hopes to help people ultimately overcome that emotion. “Humans have never been the only ones to make music,” he says, noting that birds and other animals have long played their own melodies and beats—and robots should be no different.

In 2019, the label created its first AI artist, Rey1, which Sabillón guided as it created its own unique form of music, including a futuristic-sounding synthesized string composition called “Theophany.” The artist even performed in its own online show last year. “When people actually experience it, they are amazed,” he says. “They can’t believe a nonhuman made it—they are like, ‘I thought this was years off.’” Admittedly, not all of Rey1’s songs are hits. “I would not consider 90 percent of them commercial-ready,” Sabillón says. But you could say the same thing about the Beatles or Rihanna, he adds. We only hear the polished compositions of our favorite artists, not their failed ideas.

As for the threat that AI presents to human performers, Sabillón takes the radical view that AI models don’t owe anything to the humans whose works they use to generate music. After all, everyone is influenced by someone. “Say you learned to play the piano by only listening to Elton John,” he says. “You buy all his songbooks and learn all his songs, and then you make original music that is obviously influenced by Elton John. If you want to pay him, sure—but you owe him nothing.” Robots, Sabillón argues, shouldn’t be held to different rules.

Fear of an AI music takeover is driven by economic concerns, and probably rightly so, he says. “People will lose their jobs because of this. It’s just inevitable.” And it’s probably only a matter of time, he says, before AI can create music as well as humans can. “Ultimately, your sense of why you’re making music has to go beyond skill or wanting to make a living,” Sabillón says. “This might sound overly poetic, but for me and many of the people I went to Berklee with, making music has to do with a communion with the eternal, something at its core that is ineffable and difficult to talk about, but it’s why we are moved by music and why we do it.” 
