Fast Depth Estimation for View Synthesis
Disparity/depth estimation from sequences of stereo images is an important element of 3D vision. Owing to occlusions, imperfect camera settings and homogeneous luminance, accurate depth estimation remains a challenging problem. Targeting view synthesis, we propose a novel learning-based framework that makes use of dilated convolutions, densely connected convolutional modules, a compact decoder and skip connections. The network is shallow but dense, so it is both fast and accurate. Two additional contributions – a non-linear adjustment of the depth resolution and the introduction of a projection loss – reduce the estimation error by up to 20%. The network outperforms state-of-the-art methods, improving the accuracy of depth estimation and view synthesis by approximately 45% on average. Where our method generates depth of comparable quality, it runs 10 times faster than those methods.
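The abstract does not specify the network's exact layer configuration, but the key building block it names, dilated convolution, can be illustrated in isolation. The minimal numpy sketch below (all names are ours, not the paper's) shows how dilation enlarges the receptive field without adding parameters:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """Valid-mode 1-D convolution with a dilated kernel.

    Dilation inserts (dilation - 1) gaps between kernel taps, so a
    k-tap kernel covers (k - 1) * dilation + 1 input samples while
    still using only k weights.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1  # effective receptive field
    out_len = len(x) - span + 1
    out = np.zeros(out_len)
    for i in range(out_len):
        for j in range(k):
            out[i] += x[i + j * dilation] * kernel[j]
    return out

# Example: a 3-tap kernel with dilation 2 spans 5 input samples.
y = dilated_conv1d(np.arange(8.0), np.array([1.0, 1.0, 1.0]), dilation=2)
```

Stacking such layers with growing dilation rates is a common way to cover large disparities cheaply, which is consistent with the abstract's "shallow but dense" design goal.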
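The projection loss mentioned above is not defined in the abstract; a common formulation, sketched here as an assumption rather than the paper's exact loss, warps one stereo view into the other using the predicted disparity and penalizes the photometric residual:

```python
import numpy as np

def projection_loss(left, right, disp):
    """Mean L1 photometric error between the left view and the right
    view warped by the predicted disparity (nearest-neighbor sampling).

    Assumes rectified stereo, where a left-image pixel at column x
    corresponds to right-image column x - disp. This is an illustrative
    formulation, not necessarily the loss used in the paper.
    """
    h, w = left.shape
    xs = np.arange(w)
    recon = np.empty_like(left)
    for r in range(h):
        # Sample the right view at x - disp, clipped to the image border.
        src = np.clip(np.round(xs - disp[r]).astype(int), 0, w - 1)
        recon[r] = right[r, src]
    return np.abs(recon - left).mean()
```

A disparity map that correctly aligns the two views yields a lower loss than an incorrect one, which is what lets the loss supervise depth directly from image pairs.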