Deep-learning to Predict Perceived Differences to a Hidden Reference Image

12/06/2018
by Mojtaba Bemana, et al.

Image metrics predict the perceived per-pixel difference between a reference image B and a degraded (e.g., re-rendered) version A. We devise a neural network architecture and training procedure that predicts the MSE, SSIM, or VGG16 image difference A-B from the distorted image A alone, i.e., when the reference B is never observed. This is enabled by two insights. The first is to inject sufficiently many undistorted natural image patches, which are available in arbitrary quantities and are known to have no perceivable difference from themselves; this avoids false positives. The second is to balance the training data so that all error magnitudes are equally likely, which avoids false negatives. Surprisingly, we observe that the resulting non-paired metric can subjectively perform even better than the paired one, as it had to become robust to misalignments. We demonstrate the effectiveness of our approach quantitatively and in a user study, as well as qualitatively in an image-based rendering application where we predict difference visibility maps.
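The two insights amount to a data-construction recipe: pair distorted patches with their ground-truth error maps, resample them so every error magnitude is equally represented, and mix in clean patches whose target error is zero. The following numpy sketch illustrates one plausible way to build such a balanced batch; the helper names, binning scheme, and mixing ratio are assumptions for illustration, not the authors' published procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_map(a, b):
    """Per-pixel squared difference (one of the targets the network regresses)."""
    return (a - b) ** 2

def make_balanced_batch(patches, distort, n_bins=4, per_bin=8, clean_frac=0.5):
    """Assemble a training batch following the paper's two insights:
    1) resample distorted patches so every error magnitude is equally likely
       (avoids false negatives),
    2) inject undistorted patches with an all-zero target map
       (avoids false positives).
    Hypothetical helper: `distort` maps a clean patch to a degraded copy."""
    # Build (input, target) pairs: the network would see only the input.
    pairs = []
    for p in patches:
        a = distort(p)
        pairs.append((a, mse_map(a, p)))
    # Bin pairs by mean error and draw uniformly per bin (balancing step).
    errs = np.array([t.mean() for _, t in pairs])
    edges = np.quantile(errs, np.linspace(0.0, 1.0, n_bins + 1))
    batch = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.where((errs >= lo) & (errs <= hi))[0]
        for j in rng.choice(idx, size=per_bin, replace=True):
            batch.append(pairs[j])
    # Inject clean patches: identical input, zero-error target.
    n_clean = int(len(batch) * clean_frac)
    for k in rng.choice(len(patches), size=n_clean, replace=True):
        clean = patches[k]
        batch.append((clean, np.zeros_like(clean)))
    return batch
```

Any per-pixel metric can stand in for `mse_map` (e.g., an SSIM or VGG16 feature distance), since the recipe only needs a target map per distorted patch.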
