In James Gutierrez’s 40,000 Years of Music Technology class, students create an AI-generated song, giving them experience with tools that are infiltrating the recording industry — and raising existential questions.
A central argument of the Northeastern University class “40,000 Years of Music Technology” is that hand-wringing over new technology’s impact on the creation and performance of music is just about as old as music itself.
“John Philip Sousa was famously against the phonograph,” says James Gutierrez, an assistant teaching professor of music who has taught the course since 2021. The legendary march composer and bandleader thought that recordings of instruments and vocals would discourage people from making music themselves.
The latest in that long tradition of tech-related musical worries, which has touched on everything from autotune to electricity itself, is the impact of artificial intelligence. So last year Gutierrez incorporated it into the syllabus, giving students an assignment to create a song with AI tools.
“This is a moment in history,” Gutierrez says. “AI is already in every area of music production: concept generation, prompting lyrics, prompting musical ideas, mastering recordings. Over 150,000 new songs are published on Spotify every day, and it’s doubtless that more and more of them [use AI] at some point in their development. [These students] will help decide the protocols of acceptable and unacceptable use, which is an open question right now.”
For the assignment, students first generate lyrics using a large language model such as ChatGPT. They feed those lyrics into a music generation program called Suno AI, choosing properties including genre (country, EDM, hip hop) and a vocal style modeled on any of a wide selection of popular singers. From those prompts, Suno AI generates a composition with a basic pop-song structure; along with it, students submit an open-ended written response outlining their emotional reaction to the process.
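For readers curious what the first step might look like in practice, here is a minimal sketch of the lyric-generation stage, assuming a student scripts it with the OpenAI Python client rather than the ChatGPT web interface. The model name, prompt wording, and the settings collected for Suno are all illustrative; Suno itself is used through its web interface, so the sketch simply assembles the choices a student would enter there by hand.

```python
# Illustrative sketch only: generate song lyrics with the OpenAI API, then
# collect the settings a student would enter by hand in Suno's web interface.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model works for this exercise
    messages=[
        {
            "role": "user",
            "content": "Write two verses and a chorus for a country song about small-town summers.",
        },
    ],
)
lyrics = response.choices[0].message.content

# Properties the assignment asks students to choose; these go into Suno's
# interface along with the lyrics (the field names here are just placeholders).
suno_settings = {
    "genre": "country",          # e.g. country, EDM, hip hop
    "vocal_style": "warm baritone",
    "structure": "verse-chorus-verse-chorus",
}

print(lyrics)
print(suno_settings)
```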
“My goal is for them to start thinking through their responsibility with these tools,” Gutierrez says.
He says that most students report feeling uneasy afterward, pointing to the “uncanny valley” nature of the resulting songs.
“Sometimes it’s very, very close to what a human would produce, but it’s not quite there yet. That difference makes them uneasy. Even when they’re very well done and you can’t tell it was done by AI, [students feel] they should be able to tell, or they don’t like the fact that now they could be duped by something they think is human-made.”
That opens up a conversation about ownership: “Is it important for the meaning that you derive from a song that [it] is actually produced by a human 100%? 90%? 50%? What is that threshold?” Gutierrez asks. “It turns out we don’t really know.”
He adds that while pondering such existential questions, the students have fun, too. AI currently fares better in some genres than others, and many compositions are unintentionally hilarious. “It can’t rap very well. Not yet, anyway,” he says.
One student prompted Suno for a country song about using AI to write a song — a metatextual premise that proved awkwardly high-concept for a genre known for addressing more literal, everyday scenarios.
“Hearing it talk about tech and generative AI and these other cutting-edge ideas was so strange,” Gutierrez says. “It was more of a critique of the genre and our limited understanding of it than of AI.”