AES Symposium on AI and The Musician

I recently presented as part of a panel on Educating the Next Generation of AI-Enabled Musicians at the AES Symposium on AI and The Musician. The conference was organized by Jonathan Wyner and Christian J. Steinmetz and held at the Berklee College of Music in Boston from June 6-8. It presented a variety of perspectives on music and AI from composers, songwriters, professors, lawyers, machine learning researchers, and software developers. It was wonderful to meet so many like-minded individuals, all sorting through both the incredible potential of AI as a powerful tool for creation and the downsides and stress it may place on an already fragile music economy for independent creators.

Below, you will find a detailed description of my presentation, “Finding the Signal Through the Noise: Integrating Generative AI Into Higher Education Music Curriculum,” as well as the slides I presented at the conference.

Overview:

This presentation explores methods for incorporating generative AI into the music curriculum, focusing on discussion and experiential learning as pedagogical strategies to deepen students’ understanding of, and critical engagement with, AI in music. I will discuss results and data from student surveys conducted during Open Labs 2024: Music and Artificial Intelligence, a lecture and discussion series presented at the Center for Music Technology at West Chester University of Pennsylvania. Insights gleaned from these sessions underscore the importance of integrating AI education into the music curriculum, equipping future music professionals with the knowledge and skills necessary to navigate the increasingly AI-driven landscape of music production and studio composition.

Description: 

Upon graduation, music majors face an uphill battle to establish themselves in a highly competitive music industry. Aspiring artists, session musicians, composers, recording engineers, and industry professionals must assemble what Angela Myles Beeching calls a “portfolio career” built upon multiple income streams. Generative AI, which in 2022-2023 showed remarkable potential for creative output in text and images, is now quickly gaining capabilities in audio and video, further complicating matters for emerging artists.

In the near future, up-and-coming musicians may find themselves in direct competition with the outputs of generative AI, particularly in the fields of library music and composition for games and media. Additionally, soon-to-be graduates in creative fields can find themselves confused by competing narratives: AGI hyperbole on the one hand and outright dismissal of generative AI’s capabilities on the other. In this landscape, music educators have an ethical imperative to help students find the signal through the noise and better understand the current capabilities and limitations of generative AI for music and audio.

Within this context, I started Open Labs 2024: Music and Artificial Intelligence as a six-part lecture and discussion series at the Center for Music Technology at West Chester University of Pennsylvania. The series engaged students interested in music production and studio composition in critical discourse and hands-on exploration of generative AI systems. Each session delved into a distinct aspect of AI in music: an overview of current technologies, text-to-audio generation, MIDI generation, remixing applications, timbre transfer, and ethical and economic considerations.

My ongoing research on machine learning applications for music composition, which dates to 2018, serves as a foundation for this presentation. My paper, “Technorealism and Music: Towards a Balanced View of Artificial Intelligence and Musical Composition,” explores the implications of first-generation symbolic (Aiva) and hybrid algorithmic (Jukedeck) composition systems for the media composition landscape.

My presentation will outline each of the six sessions of my Open Labs series, offering a template for educators and a roadmap for students looking to expand their knowledge of AI’s current state and future impact on music creation. Additionally, my survey data from student participants will interest developers hoping to learn which types of applications aspiring music creators found most helpful in their workflows, and whether they prefer comprehensive web-based tools or narrowly focused AU and VST plugins that can be inserted into standard DAWs.

In conclusion, the convergence of musical creation and AI represents both a promising frontier and a potential challenge for emerging musicians entering the industry. The integration of generative AI into music creation introduces new possibilities for creativity, but it also raises questions about artistic expression, economic sustainability, and ethics. In response to these challenges, initiatives like Open Labs 2024 provide invaluable platforms for critical discourse, hands-on exploration, and ethical reflection. By fostering a nuanced understanding of AI’s current capabilities and limitations, music educators play a crucial role in empowering students to navigate this evolving landscape with confidence and integrity.