Revisiting Self-Supervised Monocular Depth Estimation

03/23/2021
by Ue-Hwan Kim, et al.

Self-supervised learning of depth map prediction and motion estimation from monocular video sequences is of vital importance, since it enables a broad range of tasks in robotics and autonomous vehicles. A large body of research has improved performance by tackling challenges such as illumination variation, occlusions, and dynamic objects. However, each of these efforts targets an individual goal and remains a separate line of work. Moreover, most previous works adopt the same CNN architecture, leaving potential architectural benefits unexplored. The inter-dependencies among previous methods and the effect of architectural factors therefore remain to be investigated. To achieve these objectives, we revisit numerous previously proposed self-supervised methods for joint learning of depth and motion, perform a comprehensive empirical study, and unveil multiple crucial insights. Furthermore, as a result of our study we remarkably enhance performance, surpassing the previous state of the art.
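The self-supervision described in the abstract typically works by view synthesis: the predicted depth and relative camera pose are used to warp a neighboring (source) frame into the target view, and a photometric loss between the synthesized and real target image supervises both networks. The sketch below illustrates only the geometric reprojection step of that pipeline; the function name, shapes, and use of NumPy are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def reproject(depth, K, T):
    """Map target-frame pixels into the source frame (illustrative sketch).

    depth: (H, W) predicted depth for the target frame
    K:     (3, 3) camera intrinsics
    T:     (4, 4) relative pose, target -> source (from the motion network)
    Returns an (H, W, 2) array of source-frame pixel coordinates, which would
    be used to bilinearly sample the source image when synthesizing the
    target view for the photometric loss.
    """
    H, W = depth.shape
    # Pixel grid in homogeneous coordinates, shape (3, H*W)
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T.astype(float)
    # Back-project pixels to 3-D camera points using the predicted depth
    cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
    # Transform the points into the source camera frame
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    src = (T @ cam_h)[:3]
    # Project back onto the source image plane
    proj = K @ src
    proj = proj[:2] / np.clip(proj[2:], 1e-6, None)
    return proj.T.reshape(H, W, 2)
```

With an identity pose and constant depth, every pixel maps back to itself, which is a handy sanity check; in training, the sampled source image is compared to the target frame (commonly with an L1 plus SSIM photometric loss) to provide the self-supervision signal.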
