Classification of Infant Crying in Real-World Home Environments Using Deep Learning

05/12/2020
by   Xuewen Yao, et al.

In the domain of social signal processing, automated audio recognition is a promising avenue for researchers interested in accessing daily behaviors that can contribute to wellbeing and mental health outcomes. However, despite remarkable advances in mobile computing and machine learning, audio behavior detection models are largely constrained to data collected in controlled settings such as labs or call centers. This is problematic because their performance is unlikely to generalize adequately to real-world applications. In the current paper, we present a model that combines deep spectrum and acoustic features to detect and classify infant distress vocalizations in real-world data. To develop our model, we collected and annotated a large dataset of over 780 hours of real-world audio, recorded via a wearable audio recorder worn by infants for up to 24 hours in their natural home environments. Our model achieves an F1 score of 0.597, compared with an F1 score of 0.166 for real-world state-of-practice infant distress classifiers and an F1 score of 0.26 for state-of-the-art, real-world infant distress classifiers published last year in the Interspeech paralinguistics challenge. Impressively, it also achieves an F1 score within 0.1 of state-of-the-art infant distress classifiers developed and tested on laboratory-quality data.
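The abstract describes fusing "deep spectrum" features (activations of a neural network applied to audio spectrograms) with conventional acoustic descriptors before classification. As a rough illustration only — the paper's actual network, feature sets, and dimensions are not specified here, so a fixed random projection stands in for the pretrained CNN and two simple low-level descriptors stand in for the acoustic feature set:

```python
import numpy as np

def stft_logmag(x, n_fft=512, hop=256):
    # frame the signal, apply a Hann window, and take the log-magnitude FFT
    # (a minimal spectrogram; real pipelines typically use log-mel filterbanks)
    frames = np.array([x[i:i + n_fft] for i in range(0, len(x) - n_fft, hop)])
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1))
    return np.log1p(spec)

def deep_spectrum_features(logspec, dim=128, seed=0):
    # stand-in for a pretrained CNN feature extractor: a fixed random
    # projection of the flattened log-spectrogram down to `dim` values
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((logspec.size, dim)) / np.sqrt(logspec.size)
    return logspec.ravel() @ W

def acoustic_features(x):
    # two simple low-level descriptors: RMS energy and zero-crossing rate
    rms = np.sqrt(np.mean(x ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2
    return np.array([rms, zcr])

def fused_features(x):
    # concatenate deep-spectrum and acoustic descriptors into one vector,
    # mirroring the combined-feature design described in the abstract;
    # a downstream classifier (e.g. an SVM) would consume this vector
    return np.concatenate([deep_spectrum_features(stft_logmag(x)),
                           acoustic_features(x)])

# example: one second of synthetic audio at 16 kHz
x = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = fused_features(x)
print(feats.shape)
```

This is a sketch of the fusion pattern, not the published model: the random projection has none of the learned structure of real deep-spectrum features, but the shape of the pipeline (spectrogram, learned features, hand-crafted features, concatenation) is what the abstract describes.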
