Deep Predictive Models in Interactive Music
Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems and how they can encourage or enhance the music-making of human users. Musical performance requires prediction in order to operate instruments and to perform in ensembles. We argue that predictive models could help interactive systems to understand their temporal context and ensemble behaviour. Deep learning can provide data-driven models with a long memory of past states. We advocate for predictive musical interaction, where a predictive model is embedded in a musical interface, assisting users by predicting unknown states of musical processes. We propose a framework for incorporating such predictive models into the sensing, processing, and result architecture that is often used in musical interface design. We show that our framework accommodates deep generative models, as well as models for predicting gestural states or other high-level musical information. We motivate the framework with two examples from our recent work, as well as systems from the literature, and suggest musical use-cases where prediction is a necessary component.
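To make the sensing, processing, and result structure concrete, the sketch below shows one possible way a predictive model might be embedded in such a loop. It is an illustrative assumption, not code from the paper: the names `PredictiveInterface` and `GestureRNN`, and the trend-extrapolating placeholder predictor, stand in for a trained deep sequence model over gestural states.

```python
"""Minimal sketch of a predictive musical interaction loop.
Hypothetical illustration only; class and method names are assumptions."""

from collections import deque
import random


class GestureRNN:
    """Stand-in for a deep sequence model (e.g., an LSTM trained on
    gestural data). Here it simply extrapolates the last movement."""

    def predict_next(self, history):
        if len(history) < 2:
            return history[-1] if history else 0.0
        # Naive placeholder: continue the most recent trend with noise.
        delta = history[-1] - history[-2]
        return history[-1] + delta + random.gauss(0.0, 0.01)


class PredictiveInterface:
    """Sensing -> processing -> result loop with an embedded predictor."""

    def __init__(self, model, memory=64):
        self.model = model
        self.history = deque(maxlen=memory)  # long memory of past states

    def sense(self, control_value):
        """Sensing stage: record the performer's control input."""
        self.history.append(control_value)

    def process(self, performer_active):
        """Processing stage: use real input when available, otherwise
        fill in the unknown state with the model's prediction."""
        if performer_active:
            return self.history[-1]
        predicted = self.model.predict_next(list(self.history))
        self.history.append(predicted)  # feed prediction back as context
        return predicted

    def result(self, state):
        """Result stage: map the (real or predicted) state to sound
        parameters; here just a printout."""
        print(f"synth parameter -> {state:.3f}")


if __name__ == "__main__":
    interface = PredictiveInterface(GestureRNN())
    # Performer plays a rising gesture, then stops touching the sensor;
    # the predictive model continues the gesture on their behalf.
    for t in range(10):
        performer_active = t < 5
        if performer_active:
            interface.sense(0.1 * t)
        interface.result(interface.process(performer_active))
```

In this reading, the predictor supplies unknown states of the musical process (here, a continuation of a gesture) whenever live input is unavailable, while the sensing and result stages remain unchanged.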