This story originally appeared in Fanfare Spring 2025.
How is artificial intelligence affecting the liberal arts and humanities? A Night with Northwestern in San Francisco brought together faculty members Daniel Shanahan (Bienen), Özge Samancı (Communication), and James Lee (Medill) for a panel discussion on this important topic, moderated by Weinberg College alumnus Michael Spinella, director of content strategy at Adobe and member of the Northwestern Libraries Board of Governors.
The event related to two of the University’s priorities: “Harness the power of data analytics and artificial intelligence” and “Enhance the creative and performing arts.” Shanahan, who is associate professor of music theory and cognition and sits on Northwestern’s data science and AI steering committee, spoke about some of the ways AI is affecting the theory and practice of computational music research, as well as its role in the social aspects of experiencing and making music.
“Maybe being human is about more than being efficient,” Shanahan said while discussing software that promises to make learning an instrument easier. He argued that, for artists, the process is often as important as the end result. Shanahan shares more reflections below.
Tell us about your work related to artificial intelligence.
Shanahan: Broadly speaking, I explore the computational modeling of the musical experience, and AI is a large part of that story. There is a long history, going back to the 1940s, and it’s full of so many fascinating characters, failed experiments, and ideas that have been kind of forgotten. There is a link to be made between those early attempts to squish the complex musical experience into something a computer could parse and the AI systems we work with today.
In my class on computational music analysis, we extract data from audio files and notated scores, ask humanities-type questions, and explore what it means for various musical features to be analyzable and meaningful when examined with a computer. I also teach a class focusing on how these features of musical experience are treated as data points.
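As a rough illustration of the kind of feature extraction such a course works with, the sketch below pulls a key estimate and note list from a notated score using the open-source music21 library, and chroma and tempo features from a recording using librosa. The specific chorale, example clip, and features are illustrative assumptions, not materials from the class.

```python
# A minimal sketch of score- and audio-based feature extraction.
# Assumes the open-source music21 and librosa libraries; the chorale and
# example recording below are illustrative, not drawn from the course.
from music21 import corpus
import librosa

# --- Symbolic data: a notated score from music21's built-in corpus ---
score = corpus.parse('bach/bwv66.6')            # a Bach chorale
estimated_key = score.analyze('key')            # algorithmic key estimate
pitches = [n.pitch.nameWithOctave for n in score.flatten().notes if n.isNote]
print(estimated_key, pitches[:8])

# --- Audio data: features extracted from a recording via librosa ---
y, sr = librosa.load(librosa.example('trumpet'))    # bundled example clip
chroma = librosa.feature.chroma_stft(y=y, sr=sr)    # pitch-class energy over time
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)  # global tempo estimate
print(round(float(tempo)), chroma.shape)
```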
What is AI doing well in music, and where is it struggling?
Shanahan: AI is pretty good at replicating popular music styles—harmonic progressions, vocal inflections, melodies, timbres, etc. For example, if you want a lo-fi track or a somewhat inoffensive pop jingle, AI models can do that. It’s more difficult to draw outside the lines, and there’s still quite a bit of overfitting, which happens when a model is trained to perform very well on one thing but struggles when asked to generalize on a slightly different question. For example, asking for a romantic pop song in an early 1980s style would return something that sounds eerily like Journey’s “Don’t Stop Believin’,” and often with very similar lyrics. In my class, we recently discussed the rise of “Spotify-core,” which refers to music optimized for streaming. AI can fake Spotify-core somewhat well!
It’s also important to note that models were trained on the intellectual property of artists who haven’t really been compensated, and many of the bigger AI music generation companies are being sued.
How have the social aspects of music making changed in the face of AI?
Shanahan: I think and hope that music making will always be a social experience. A hundred years ago, people would have been quite shocked to find you listening to music alone, because music is inherently social. Now algorithms can create a highly individuated experience—recommendations can seem like a perfect fit—and it follows that this very personalized experience makes the act of listening less of a shared social one. Nevertheless, people are great at finding ways to be social with music even if the environment has changed. TikTok duets, for example, show how people can creatively work in a dialogue with other musicians in a digital space.
What excites you about AI?
Shanahan: For about 50 years, there have been maybe two holy grails in music information retrieval. The first is with symbolic data, or scores: the ability to accurately run character recognition on handwritten manuscripts, or even to read something with enough accuracy that it takes less time to scan in than it does to type by hand. The second has to do with audio data, this notion of polyphonic source separation. If you have multiple voices singing something, humans are quite good at hearing them as separate voices, but computers historically have been very bad at it. However, in just the past five years, we’ve made huge strides on both of these problems. It’s a very exciting time on that front, and AI will likely facilitate a great deal of future humanities research.
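On the audio side, polyphonic source separation has become something you can run with off-the-shelf open-source tools. The sketch below uses Deezer's Spleeter as one example; the model choice and file names are assumptions for illustration, not tools named in the interview.

```python
# A minimal sketch of polyphonic source separation with an off-the-shelf
# open-source tool (Deezer's Spleeter). The input file name is a placeholder.
from spleeter.separator import Separator

# Load a pretrained model that splits a mix into vocals, drums, bass, and other.
separator = Separator('spleeter:4stems')

# Writes one audio file per separated stem into the output directory,
# e.g. output/mix/vocals.wav, output/mix/drums.wav, ...
separator.separate_to_file('mix.wav', 'output/')
```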