I recently presented as part of a panel on Educating the Next Generation of AI-Enabled Musicians at the AES Symposium on AI and The Musician, organized by Jonathan Wyner and Christian J. Steinmetz and held at the Berklee College of Music in Boston from June 6-8. The conference presented a variety of perspectives on music and AI from composers, songwriters, professors, lawyers, machine learning researchers, and software developers. It was incredible to meet so many like-minded individuals, all sorting through both the enormous potential of AI as a powerful tool for creation and the downsides and stress it may place on an already fragile music economy for independent creators.

Below, you will find a detailed description of my presentation, “Finding the Signal Through the Noise: Integrating Generative AI Into Higher Education Music Curriculum,” as well as the slides I presented at the conference.

Overview:

This presentation explores methods for incorporating Generative AI into the music curriculum, focusing on discussion and experiential learning as pedagogical strategies to deepen students’ understanding of and critical engagement with AI in music. I will discuss results and data from student surveys conducted during Open Labs 2024: Music and Artificial Intelligence, a lecture and discussion series presented at the Center for Music Technology at West Chester University of Pennsylvania. Insights gleaned from these sessions underscore the importance of integrating AI education into the music curriculum, equipping future music professionals with the knowledge and skills necessary to navigate the increasingly AI-driven landscape of music production and studio composition.

Description: 

Upon graduation, music majors face an uphill battle to establish themselves in a highly competitive music industry. Aspiring artists, session musicians, composers, recording engineers, and industry professionals must assemble what Angela Myles Beeching calls a “portfolio career” built upon multiple income streams. Generative AI, which in 2022-2023 showed incredible potential for creative output in text and images, is now quickly gaining capabilities in audio and video, further complicating matters for emerging artists.

In the near future, up-and-coming musicians may find themselves in direct competition with the outputs of generative AI, particularly in the fields of library music and composition for games and media. Additionally, soon-to-be graduates in creative fields can find themselves confused by competing narratives of AGI hyperbole on the one hand and outright dismissal of generative AI’s capabilities on the other. In this landscape, music educators have an ethical imperative to help students find the signal through the noise and better understand the current capabilities and limitations of generative AI for music and audio.

Within this context, I started Open Labs 2024: Music and Artificial Intelligence as a six-part lecture and discussion series at the Center for Music Technology at West Chester University of Pennsylvania. The series engaged students interested in music production and studio composition in critical discourse and hands-on exploration using generative AI systems. Each session delved into distinct aspects of AI in music, including an overview of current technologies, text-to-audio, MIDI generation, remixing applications, timbre transfer, and ethical and economic considerations. 

My ongoing research on machine learning applications for music composition, conducted since 2018, will serve as a foundation for my presentation. My paper, “Technorealism and Music: Towards a Balanced View of Artificial Intelligence and Musical Composition,” explores the implications of first-generation symbolic (Aiva) and hybrid algorithmic (Jukedeck) composition tools for the media composition landscape.

My presentation will outline each of the six sessions of my Open Labs series, offering a template for educators and a roadmap for students looking to expand their knowledge of AI’s current state and future impacts on music creation. Additionally, my survey data from student participants should interest developers hoping to learn which types of applications aspiring music creators found most helpful in their workflows, as well as their preferences regarding comprehensive web-based tools versus narrowly focused AU and VST plugins that can be inserted into standard DAWs.

In conclusion, the convergence of musical creation and AI represents both a promising frontier and a potential challenge for emerging musicians entering the industry. The integration of generative AI into music creation processes introduces new possibilities for creativity but also raises questions about its implications for artistic expression, economic sustainability, and ethical considerations. In response to these challenges, initiatives like Open Labs 2024 provide invaluable platforms for critical discourse, hands-on exploration, and ethical reflection. By fostering a nuanced understanding of AI’s current capabilities and limitations, music educators play a crucial role in empowering students to navigate this evolving landscape with confidence and integrity.

Duotone Audio recently commissioned me to compose an original score for Samsung’s new “Recommended by the Pros” campaign. The campaign features creatives from across the globe speaking about how Samsung AI-enhanced devices improve their workflows. I composed a modern, percussion-driven score for their collaboration with Korean director Kim Seong-Hun. Check it out below, and visit the Samsung YouTube page to see the full series from this campaign.

Original music by Devin Arne, 0:28-1:44. Composed for Duotone Audio LTD.

With the support of WCU, I had the pleasure of attending this year’s Audio Engineering Society European Conference, held at Aalto University just outside of Helsinki, Finland. I got to meet some great people, hear some amazing talks, and check out new technologies. I am currently deep in research into AI-powered systems that can enhance media scoring and music production workflows, and it was inspiring to learn about the latest advances in this field from folks at companies including NeuralDSP and HarmonAI.

Here are some photos from the event:

As a new faculty member at WCU, I was honored to be invited to perform at the 33rd Annual WCU Jazz Festival with the WCU Faculty Jazz Ensemble. At the Madeleine Wing Adler Theatre, we performed standards and original compositions, including my composition “There and Back” and my arrangement of Jobim’s “O Grande Amor.” 

33rd Annual WCU Jazz Festival: WCU Faculty Concert

The WCU Faculty Jazz Ensemble:

Jonathan Ragonese, Director & Saxophone

John Swana, Trumpet

Dan Cherry, Trombone

Jeremy Jordan, Piano & Keyboard

Devin Arne, Electric Guitar

David Cullen, Acoustic Guitar

Peter Paulsen, Double Bass

Marc Jacoby, Percussion, Vibraphone

Christopher Hanning, Drum Set


I am thrilled to announce that my music has been featured in the first season of Mind Your Manners on Netflix. My track “Prime Time,” published by Lift Music, can be heard in the episode “Elevate My Life.”

As described on Tudum, the Netflix blog, “Sara Jane Ho aims to help her clients create positive change in their lives. In Season 1 of Mind Your Manners, the etiquette expert works with six people from different backgrounds to develop their social skills, build confidence and improve their professional and personal relationships. Drawing on Chinese and Western perspectives, Ho offers a distinctive approach to navigating various social situations and cultivating good habits at home that serves as a foundation for personal growth.”

Thanks to the herculean efforts of my colleagues over the last several years, I am thrilled to share that the Wells School of Music at West Chester University now offers a Bachelor of Music in Studio Composition.

I am honored to be a part of this one-of-a-kind program at WCU. It offers high-quality music education at an affordable price and in an ideal location, given West Chester’s proximity to Philadelphia, New York, and Washington, D.C. The Bachelor of Music in Studio Composition is perfect for students who want to combine music production, entrepreneurship, and technology with composition and performance. At West Chester University, students have access to incredible faculty and facilities, including our Center for Music Technology, as well as the opportunity to collaborate with a large and growing student body of musicians.

Check out the feature on the WCU News page here.

Studio Composition

I am thrilled to announce that I have accepted the position of Assistant Professor of Studio Composition at West Chester University of Pennsylvania. This brand-new role will serve the recently created bachelor’s degree program, in which students will study music production, film and media scoring, songwriting, and music entrepreneurship. To say I am ecstatic is an understatement. I look forward to this wonderful new chapter filled with music, artistry, mentorship, community, and life in the greater Philadelphia area.

My sound installation Unheard Voices: The Embodied and Networked Intelligence of Plants was recently featured in The State Press, the ASU Student Newspaper.

“Pinecones rattling: Sound installation brings attention to environment” by Anna Campbell with photos by Alex Gould.

Speaker-equipped Raspberry Pi microcomputers inside flower pots. Photo by Alex Gould.

More information about Unheard Voices, including a project description, video, and audio, can be found here.