Unsupervised Learning of Dense Optical Flow and Depth from Sparse Event Data

09/23/2018
by Chengxi Ye, et al.

In this work we present unsupervised learning of depth and motion from sparse event data generated by a Dynamic Vision Sensor (DVS). To tackle this low-level vision task, we use a novel encoder-decoder neural network architecture that aggregates multi-level features and addresses the problem at multiple resolutions. A feature decorrelation technique is introduced to improve the training of the network, and a non-local sparse smoothness constraint is used to alleviate the challenge of data sparsity. Our work is the first to generate dense depth and optical flow information from sparse event data. Our results show significant improvements over previous deep learning approaches to flow estimation from both images and events.
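The abstract gives only a high-level description of the architecture. As a rough illustration, below is a minimal PyTorch-style sketch of an encoder-decoder that fuses multi-level skip features and predicts optical flow and depth at more than one resolution from an event frame. The class names, channel counts, 4-channel event-frame input, and two prediction scales are assumptions made for illustration, not the authors' actual network; the feature decorrelation step and the non-local sparse smoothness loss described in the abstract are not shown here.

    import torch
    import torch.nn as nn

    class ConvBlock(nn.Module):
        # 3x3 convolution + ReLU, used in both encoder and decoder stages.
        def __init__(self, in_ch, out_ch, stride=1):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(self.conv(x))

    class MultiScaleEncoderDecoder(nn.Module):
        # Illustrative encoder-decoder: downsamples an event frame, then
        # upsamples while concatenating encoder (skip) features, and predicts
        # 2 flow channels + 1 depth channel at a coarse and a full resolution.
        def __init__(self, in_ch=4, base=32):
            super().__init__()
            self.enc1 = ConvBlock(in_ch, base, stride=2)        # 1/2 resolution
            self.enc2 = ConvBlock(base, base * 2, stride=2)     # 1/4 resolution
            self.enc3 = ConvBlock(base * 2, base * 4, stride=2) # 1/8 resolution
            self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
            self.dec3 = ConvBlock(base * 4, base * 2)
            self.dec2 = ConvBlock(base * 2 + base * 2, base)
            self.dec1 = ConvBlock(base + base, base)
            self.pred_mid = nn.Conv2d(base, 3, kernel_size=3, padding=1)   # coarse head
            self.pred_full = nn.Conv2d(base, 3, kernel_size=3, padding=1)  # full-res head

        def forward(self, events):
            e1 = self.enc1(events)                                # 1/2 resolution
            e2 = self.enc2(e1)                                    # 1/4 resolution
            e3 = self.enc3(e2)                                    # 1/8 resolution
            d3 = self.dec3(self.up(e3))                           # back to 1/4
            d2 = self.dec2(torch.cat([d3, e2], dim=1))            # fuse skip features
            out_mid = self.pred_mid(d2)                           # coarse flow + depth
            d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))   # back to 1/2
            out_full = self.pred_full(self.up(d1))                # full-resolution output
            flow, depth = out_full[:, :2], out_full[:, 2:]        # split (u, v) and depth
            return flow, depth, out_mid

    # Example usage with a hypothetical 4-channel event frame (e.g. per-pixel
    # counts and timestamps of positive/negative events) at 256x256 resolution.
    net = MultiScaleEncoderDecoder(in_ch=4)
    events = torch.zeros(1, 4, 256, 256)
    flow, depth, coarse = net(events)
    print(flow.shape, depth.shape)  # torch.Size([1, 2, 256, 256]) torch.Size([1, 1, 256, 256])

Predicting at several resolutions, as the abstract describes, lets a coarse estimate constrain the finer one and provides training signal even where the event input is locally sparse; how the paper weights or supervises these scales is not specified here.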
