Towards Continual, Online, Unsupervised Depth

02/28/2021
by   Muhammad Umar Karim Khan, et al.

Although depth extraction with passive sensors has seen remarkable improvement with deep learning, these approaches may fail to obtain correct depth when exposed to environments not observed during training. Online adaptation, where the neural network trains while deployed, combined with unsupervised learning provides a convenient solution. However, online adaptation causes a neural network to forget the past: past training is wasted and the network cannot provide good results if it observes past scenes again. This work deals with practical online adaptation where the input is online and temporally correlated, and training is completely unsupervised. Regularization and replay-based methods without task boundaries are proposed to avoid catastrophic forgetting while adapting to online data. Experiments are performed on different datasets with both structure-from-motion and stereo. Results on both forgetting and adaptation are reported, which are superior to recent methods. The proposed approach is more in line with the artificial general intelligence paradigm, as the neural network learns the scene where it is deployed without any supervision (target labels or tasks) and without forgetting the past. Code is available at github.com/umarKarim/cou_stereo and github.com/umarKarim/cou_sfm.
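To make the idea of boundary-free online adaptation concrete, below is a minimal PyTorch sketch of one possible scheme: each incoming frame is trained on with an unsupervised photometric loss, mixed with frames replayed from a reservoir buffer, plus an L2 penalty that discourages drift from the pre-deployment weights. All names (OnlineAdapter, depth_net, photometric_loss) and hyperparameters are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of continual online adaptation with replay and
    # weight regularization; not the authors' released code.
    import copy
    import random
    import torch


    class OnlineAdapter:
        def __init__(self, depth_net, photometric_loss, lr=1e-4,
                     buffer_size=512, replay_batch=4, reg_weight=0.01):
            self.net = depth_net
            self.loss_fn = photometric_loss      # unsupervised view-synthesis loss
            self.optim = torch.optim.Adam(self.net.parameters(), lr=lr)
            # Frozen copy of pre-deployment weights for the regularization term.
            self.anchor = copy.deepcopy(self.net).eval()
            for p in self.anchor.parameters():
                p.requires_grad_(False)
            self.buffer = []                     # replay buffer of past frames
            self.buffer_size = buffer_size
            self.replay_batch = replay_batch
            self.reg_weight = reg_weight
            self.seen = 0                        # frames observed so far

        def _reg_loss(self):
            # Penalize drift from pre-deployment parameters (no task boundaries).
            return sum(((p - a) ** 2).sum()
                       for p, a in zip(self.net.parameters(),
                                       self.anchor.parameters()))

        def step(self, frames):
            """One adaptation step on the current temporally correlated frames."""
            batch = [frames] + random.sample(
                self.buffer, min(self.replay_batch, len(self.buffer)))
            loss = sum(self.loss_fn(self.net, f) for f in batch) / len(batch)
            loss = loss + self.reg_weight * self._reg_loss()
            self.optim.zero_grad()
            loss.backward()
            self.optim.step()
            # Reservoir sampling keeps a roughly uniform sample of past frames.
            self.seen += 1
            if len(self.buffer) < self.buffer_size:
                self.buffer.append(frames)
            else:
                j = random.randrange(self.seen)
                if j < self.buffer_size:
                    self.buffer[j] = frames
            return loss.item()

The replay buffer counters forgetting by rehearsing earlier scenes, while the anchor penalty limits how far online updates can pull the network from its offline-trained solution; both operate per-frame, so no task boundaries are needed.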
