Learning Acoustic Scattering Fields for Dynamic Interactive Sound Propagation

10/10/2020
by Zhenyu Tang, et al.

We present a novel hybrid sound propagation algorithm for interactive applications. Our approach is designed for dynamic scenes and combines a neural-network-based learned representation of the acoustic scattering field with ray tracing to efficiently generate specular, diffuse, diffraction, and occlusion effects. We use geometric deep learning to approximate the acoustic scattering field in terms of spherical harmonics. The network is trained on a large 3D dataset, and its accuracy is compared against ground truth generated by an accurate wave-based solver. The runtime overhead of evaluating the learned scattering field is small, and we demonstrate interactive performance by generating plausible sound effects, including diffraction and occlusion, in dynamic scenes. We also demonstrate the perceptual benefits of our approach through an audio-visual user study.
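The abstract gives no implementation details, so the following is only a minimal sketch of the kind of learned representation it describes: a geometric deep-learning model that maps an object's geometry to spherical-harmonic (SH) coefficients of its acoustic scattering field. The PointNet-style encoder, layer sizes, SH order, and frequency-band count below are all assumptions for illustration, not the authors' architecture.

```python
# Sketch (not the authors' code): a point-cloud encoder that regresses
# spherical-harmonic coefficients of an object's acoustic scattering field.
import torch
import torch.nn as nn


class ScatteringFieldNet(nn.Module):
    """Maps surface point samples to SH coefficients per frequency band."""

    def __init__(self, sh_order: int = 3, num_freq_bands: int = 8):
        super().__init__()
        self.num_coeffs = (sh_order + 1) ** 2        # SH coefficients up to sh_order
        self.num_freq_bands = num_freq_bands
        # Shared per-point MLP (PointNet-style), followed by global max pooling.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
        )
        # Global head regressing SH coefficients for each frequency band.
        self.head = nn.Sequential(
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, self.num_coeffs * num_freq_bands),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3) surface samples of the object.
        features = self.point_mlp(points)            # (B, N, 256) per-point features
        pooled = features.max(dim=1).values          # permutation-invariant pooling
        coeffs = self.head(pooled)                   # (B, bands * coeffs)
        return coeffs.view(-1, self.num_freq_bands, self.num_coeffs)


if __name__ == "__main__":
    model = ScatteringFieldNet()
    cloud = torch.rand(2, 1024, 3)                   # two objects, 1024 points each
    print(model(cloud).shape)                        # torch.Size([2, 8, 16])
```

At runtime, such a network would be queried once per dynamic object and the predicted SH coefficients combined with a ray tracer to add scattering, diffraction, and occlusion effects, consistent with the hybrid pipeline the abstract describes.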
