Deep Music Information Dynamics

Abstract

Generative musical models often comprise multiple levels of structure, presuming that the process of composition moves from background to foreground, or between generating the musical surface and some deeper, reduced representation that governs hidden or latent dimensions of the music. In this paper we use a recently proposed framework called Deep Music Information Dynamics (DMID) to explore the information content of deep neural models of music through rate reduction of latent representation streams, contrasted with the high-rate information dynamics of the musical surface. This approach is partially motivated by rate-distortion theories of human cognition, providing a framework for exploring possible relations between imaginary anticipations existing in the listener's or composer's mind and the information dynamics of the sensory (acoustic) or symbolic score data. The DMID framework is demonstrated through several experiments with symbolic (MIDI) and acoustic (spectral) music representations. We use variational encoding to learn a latent representation of the musical surface, and this embedding is further reduced, using a bit-allocation method, into a second stream of low bit-rate encoding. The combined loss includes a temporal term, capturing the predictive properties of each encoding stream, and an accuracy term, measured as the mutual information between the low-rate encoding and the high-rate surface representation. For the case of counterpoint, we also study the mutual information between two voices in a musical piece at different levels of information reduction. The DMID framework makes it possible to explore aspects of computational creativity by juxtaposing latent/imaginary surprisal in the deeper structure with musical surprisal at the surface level, in a manner that is quantifiable and computationally tractable. The relevant information-theoretic modeling and analysis methods are discussed in the paper, suggesting that the trade-off between compression and prediction plays an important role in the analysis and design of creative musical systems.
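
To make the two-stream setup described above concrete, the following is a minimal PyTorch sketch of a DMID-style objective; it is not the authors' implementation. It assumes a simple variational encoder for the musical surface, a uniform quantizer standing in for the bit-allocation step, GRU next-step predictors for the temporal terms, and an InfoNCE contrastive bound substituted for the mutual-information accuracy term. All module names, dimensions, and these substitutions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed toy dimensions for the sketch.
SURFACE_DIM, HIGH_DIM, LOW_DIM, LOW_LEVELS = 128, 32, 8, 4

class DMIDSketch(nn.Module):
    """Two-stream encoder with a combined predictive/accuracy loss (illustrative)."""

    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(SURFACE_DIM, 2 * HIGH_DIM)   # q(z_high | x): mean and log-variance
        self.dec = nn.Linear(HIGH_DIM, SURFACE_DIM)       # p(x | z_high): surface reconstruction
        self.reduce = nn.Linear(HIGH_DIM, LOW_DIM)        # projection to the low-rate stream
        self.pred_high = nn.GRU(HIGH_DIM, HIGH_DIM, batch_first=True)  # next-step predictor, high rate
        self.pred_low = nn.GRU(LOW_DIM, LOW_DIM, batch_first=True)     # next-step predictor, low rate
        self.critic = nn.Linear(LOW_DIM, HIGH_DIM)        # critic projection for the InfoNCE bound

    @staticmethod
    def quantize(z):
        # Crude uniform quantizer with a straight-through gradient,
        # standing in for the paper's bit-allocation step.
        q = torch.round(z * LOW_LEVELS) / LOW_LEVELS
        return z + (q - z).detach()

    def forward(self, x):
        # x: (batch, time, SURFACE_DIM) frames of the musical surface
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z_high = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterized sample
        z_low = self.quantize(self.reduce(z_high))                  # low bit-rate stream

        recon = F.mse_loss(self.dec(z_high), x)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()

        # Temporal term: how well each stream predicts its own next step.
        pred_h, _ = self.pred_high(z_high[:, :-1])
        pred_l, _ = self.pred_low(z_low[:, :-1])
        temporal = (F.mse_loss(pred_h, z_high[:, 1:].detach())
                    + F.mse_loss(pred_l, z_low[:, 1:].detach()))

        # Accuracy term: InfoNCE lower bound on I(z_low; z_high),
        # treating time-aligned pairs as positives.
        b, t, _ = z_high.shape
        zl = self.critic(z_low.reshape(b * t, -1))
        zh = z_high.reshape(b * t, -1)
        scores = zl @ zh.t()                                  # (bt, bt) similarity matrix
        mi_lower_bound = -F.cross_entropy(scores, torch.arange(b * t))

        # Minimize rate/reconstruction/prediction terms while maximizing
        # the mutual-information bound between the two streams.
        return recon + kl + temporal - mi_lower_bound


model = DMIDSketch()
x = torch.randn(4, 16, SURFACE_DIM)   # a toy batch: 4 sequences of 16 surface frames
loss = model(x)
loss.backward()
```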

Keywords

Deep Neural Networks for Music, Machine Learning in Music, Music Information Dynamics

How to Cite

Dubnov, S., Chen, K. & Huang, K., (2022) “Deep Music Information Dynamics”, Journal of Creative Music Systems 1(1). doi: https://doi.org/10.5920/jcms.894

Authors

Shlomo Dubnov (UCSD)
Ke Chen (UCSD)
Kevin Huang (UCSD)

Licence

Creative Commons Attribution 4.0

Peer Review

This article has been peer reviewed.

File Checksums (MD5)

  • Camera ready manuscript: 716cd3af9a8de2ead2b13836c7c391bf