Creating A Multi-track Classical Music Performance Dataset for Multimodal Music Analysis: Challenges, Insights, and Applications
We introduce a dataset for facilitating audio-visual analysis of musical performances. The dataset comprises a number of simple multi-instrument classical music pieces, each assembled from coordinated but separately recorded performances of the individual tracks. For each piece, we provide the musical score in MIDI format, audio recordings of the individual tracks, audio and video recordings of the assembled mixture, and ground-truth annotation files including frame-level and note-level transcriptions. We describe our methodology for creating this dataset, highlighting our approaches to the challenges of maintaining synchronization and naturalness. We compare the synchronization quality of this dataset with that of existing widely used music audio datasets and show that it is high. We anticipate that the dataset will be useful for the development and evaluation of many existing music information retrieval (MIR) tasks, as well as for novel multimodal tasks. To this end, we benchmark this dataset against existing music audio datasets on two existing MIR tasks (multi-pitch analysis and score-informed source separation). We also define two novel multimodal MIR tasks (visually informed multi-pitch analysis and polyphonic vibrato analysis), and provide evaluation measures and baseline systems for future comparisons. Finally, we propose several emerging research directions that this dataset can support.
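As a concrete illustration of the evaluation that the multi-pitch analysis benchmark involves, the sketch below computes frame-level precision, recall, and F-measure with a quarter-tone (50-cent) matching tolerance, following common MIR conventions. The function name `multipitch_prf`, the tolerance value, and the greedy one-to-one matching scheme are illustrative assumptions, not the paper's exact protocol; the measures actually used are specified in the full text.

```python
import numpy as np

def multipitch_prf(ref_frames, est_frames, tol_cents=50.0):
    """Frame-level multi-pitch precision/recall/F-measure (sketch).

    ref_frames, est_frames: lists with one entry per time frame, each
    entry a list of fundamental frequencies in Hz. An estimated pitch
    counts as correct if it lies within `tol_cents` of a not-yet-matched
    reference pitch in the same frame (greedy one-to-one matching).
    """
    n_ref = n_est = n_correct = 0
    for ref, est in zip(ref_frames, est_frames):
        ref = list(ref)  # copy so matched pitches can be removed
        n_ref += len(ref)
        n_est += len(est)
        for f_est in est:
            # Find the closest unmatched reference pitch within tolerance.
            best_i, best_d = -1, tol_cents
            for i, f_ref in enumerate(ref):
                d = abs(1200.0 * np.log2(f_est / f_ref))  # distance in cents
                if d <= best_d:
                    best_i, best_d = i, d
            if best_i >= 0:
                n_correct += 1
                ref.pop(best_i)  # enforce one-to-one matching
    precision = n_correct / n_est if n_est else 0.0
    recall = n_correct / n_ref if n_ref else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure
```

For example, with one frame containing reference pitches [440.0, 660.0] Hz and a single estimate of 442.0 Hz (about 8 cents from 440 Hz), the sketch yields precision 1.0 and recall 0.5.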