Abstract
This study examines the perceptual boundaries between human-produced and AI-generated music through a controlled listening experiment with 120 participants from the Sichuan Conservatory of Music. Participants evaluated 720 excerpts across six genres (Western Classical, Jazz, Rock, Pop, R&B and Chinese Traditional), with balanced proportions of AI-generated and human-produced tracks. The AI music was generated with Suno AI and Udio AI, while the human tracks were sourced from commercial recordings. Identification accuracy was strongly genre- and model-dependent: Chinese Traditional and Western Classical excerpts were most often classified correctly, Jazz and R&B showed the lowest AI detectability, and human-produced Rock was frequently misidentified as AI-generated. Prior experience with AI music creation was associated with better AI detection, indicating a familiarity effect. These results show that authorship judgements are shaped by sonic features, listener background and evolving AI capabilities, and that the direction of misattribution is itself an informative outcome. The study recommends genre-specific benchmarks for model evaluation, routine reporting of confusion matrices and longitudinal tracking, so that progress in AI music generation can be assessed against real listening behaviour. It also supports institutional policy that pairs hands-on generation training with feature-level listening instruction. The findings contribute to music cognition, media psychology and debates on creativity in an era of algorithmic production.
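To make the recommended confusion-matrix reporting concrete, the following is a minimal Python sketch, not material from the study: the response tuples, labels and counts are hypothetical. It tabulates a 2×2 origin-versus-judgement matrix per genre and reports accuracy alongside the direction of misattribution, the quantity the abstract highlights for Rock.

```python
from collections import defaultdict

# Hypothetical listener responses as (genre, true_origin, judged_origin).
# These tuples are illustrative placeholders, not the study's data.
responses = [
    ("Jazz", "ai", "human"),
    ("Jazz", "human", "human"),
    ("Rock", "human", "ai"),
    ("Chinese Traditional", "ai", "ai"),
]

# One 2x2 confusion matrix per genre: counts[genre][(true, judged)].
counts = defaultdict(lambda: defaultdict(int))
for genre, true_origin, judged_origin in responses:
    counts[genre][(true_origin, judged_origin)] += 1

# Report per-genre accuracy and both misattribution directions.
for genre, matrix in counts.items():
    total = sum(matrix.values())
    correct = matrix[("ai", "ai")] + matrix[("human", "human")]
    print(f"{genre}: accuracy = {correct / total:.2f}, "
          f"human judged AI = {matrix[('human', 'ai')]}, "
          f"AI judged human = {matrix[('ai', 'human')]}")
```

Reporting the full matrix rather than a single accuracy figure preserves exactly the asymmetry the study found informative, such as human-produced Rock being judged AI-generated.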
Keywords
AI-generated music, perceptual bias of music, human-AI authorship expectancy, Suno AI, Udio AI
How to Cite
Longardner, J. (2026) “A Multi-Genre Study of Identification and Style Bias of AI-Generated Music”, Journal of Creative Music Systems 10(1). doi: https://doi.org/10.5920/jcms.1704