Metappearance: Meta-Learning for Visual Appearance Reproduction

04/19/2022
by Michael Fischer, et al.

There are currently two main approaches to reproducing visual appearance using Machine Learning (ML). The first trains models that generalize over different instances of a problem, e.g., different images from a dataset. Such models learn priors over the data corpus and use this knowledge to provide fast inference with little input, often as a one-shot operation. However, this generality comes at the cost of fidelity, as such methods often struggle to achieve the final quality required. The second approach does not train a model that generalizes across the data, but overfits to a single instance of a problem, e.g., a flash image of a material. This produces detailed and high-quality results, but requires time-consuming training and is, as mere non-linear function fitting, unable to exploit previous experience. Techniques such as fine-tuning or auto-decoders combine both approaches, but are sequential and rely on per-exemplar optimization. We suggest combining both techniques end-to-end using meta-learning: we overfit to a single problem instance in an inner loop, while also learning how to do so efficiently in an outer loop that builds intuition over many optimization runs. We demonstrate this concept to be versatile and efficient, applying it to RGB textures, Bi-directional Reflectance Distribution Functions (BRDFs), and spatially-varying BRDFs (svBRDFs).
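To make the inner-loop/outer-loop idea concrete, below is a minimal Reptile-style first-order meta-learning sketch, a common approximation of MAML-type training; it is not the paper's implementation. The network, `sample_task`, and all hyperparameters are illustrative placeholders, with one "task" standing in for fitting a coordinate network to a single RGB texture.

```python
# Illustrative sketch only: first-order meta-learning (Reptile-style).
# Network, data, and hyperparameters are placeholders, not the authors' setup.
import copy
import torch
import torch.nn as nn

def sample_task():
    # Hypothetical stand-in for one problem instance, e.g. one RGB texture:
    # inputs are 2D (u, v) coordinates, targets are RGB values.
    coords = torch.rand(256, 2)
    rgb = torch.rand(256, 3)
    return coords, rgb

model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))
meta_lr, inner_lr, inner_steps = 0.1, 1e-2, 8

for meta_iter in range(1000):       # outer loop: build "intuition" across runs
    coords, rgb = sample_task()
    fast = copy.deepcopy(model)     # inner loop starts from the meta-weights
    opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    for _ in range(inner_steps):    # inner loop: overfit one instance
        loss = nn.functional.mse_loss(fast(coords), rgb)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        # Reptile update: nudge the meta-weights toward the adapted weights,
        # so future inner loops converge faster on new instances.
        for p, q in zip(model.parameters(), fast.parameters()):
            p += meta_lr * (q - p)
```

The first-order update here avoids differentiating through the inner optimization; full MAML-style training would instead backpropagate the post-adaptation loss through the inner-loop gradient steps.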
