GANPOP: Generative Adversarial Network Prediction of Optical Properties from Single Snapshot Wide-field Images

06/12/2019
by Mason T. Chen, et al.

We present a deep learning framework for wide-field, content-aware estimation of the absorption and scattering coefficients of tissue, called Generative Adversarial Network Prediction of Optical Properties (GANPOP). Spatial frequency domain imaging is used to obtain ground-truth optical properties from in vivo human hands, freshly resected human esophagectomy samples, and homogeneous tissue phantoms. Images of objects under either flat-field or structured illumination are paired with registered optical property maps and used to train conditional generative adversarial networks that estimate optical properties from a single input image. We benchmark this approach against a single-snapshot optical property (SSOP) technique using a normalized mean absolute error (NMAE) metric. In human gastrointestinal specimens, GANPOP estimates both reduced scattering and absorption coefficients at 660 nm from a single 0.2/mm spatial-frequency illumination image with 58% higher accuracy than SSOP. When applied to both in vivo and ex vivo swine tissues, a GANPOP model trained solely on human specimens and phantoms estimates optical properties with approximately 43% improved accuracy over SSOP, indicating adaptability to sample variety. Moreover, we demonstrate that GANPOP estimates optical properties from flat-field illumination images with similar error to SSOP, which requires structured illumination. Given a training set that appropriately spans the target domain, GANPOP has the potential to enable rapid and accurate wide-field measurements of optical properties, even from conventional imaging systems with flat-field illumination.
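The abstract benchmarks GANPOP against SSOP using a normalized mean absolute error (NMAE). The exact normalization is not given here; the sketch below assumes one common convention, mean absolute error divided by the mean ground-truth magnitude, applied per optical-property map:

```python
import numpy as np

def nmae(pred: np.ndarray, truth: np.ndarray) -> float:
    """Normalized mean absolute error between a predicted optical-property
    map (e.g., absorption or reduced scattering) and its ground truth.

    Normalization by the mean ground-truth magnitude is an assumption,
    not a definition taken from the paper.
    """
    return float(np.mean(np.abs(pred - truth)) / np.mean(np.abs(truth)))

# Hypothetical usage: compare predicted and SFDI-derived absorption maps.
mu_a_pred = np.random.rand(256, 256) * 0.05   # placeholder prediction (1/mm)
mu_a_true = np.random.rand(256, 256) * 0.05   # placeholder ground truth (1/mm)
print(f"NMAE: {nmae(mu_a_pred, mu_a_true):.3f}")
```

A scale-normalized error like this lets absorption and reduced-scattering accuracy be compared on a common footing even though the two coefficients differ by orders of magnitude.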
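GANPOP is described as a conditional GAN that maps a single illumination image to a registered optical-property map. As one illustrative sketch, a pix2pix-style conditional objective pairs an adversarial term with an L1 reconstruction term; the discriminator signature, loss weighting, and `lambda_l1` value below are assumptions for illustration, not the authors' published configuration:

```python
import torch
import torch.nn as nn

def generator_loss(disc: nn.Module,
                   input_img: torch.Tensor,
                   fake_map: torch.Tensor,
                   real_map: torch.Tensor,
                   lambda_l1: float = 100.0) -> torch.Tensor:
    """Conditional-GAN generator loss: fool the discriminator on the
    (input image, predicted map) pair while keeping the prediction close
    to the registered ground-truth optical-property map via an L1 term.

    lambda_l1 is a hypothetical weight chosen for illustration.
    """
    bce = nn.BCEWithLogitsLoss()
    # Discriminator conditions on the input image concatenated with the map.
    pred_fake = disc(torch.cat([input_img, fake_map], dim=1))
    adversarial = bce(pred_fake, torch.ones_like(pred_fake))
    reconstruction = nn.L1Loss()(fake_map, real_map)
    return adversarial + lambda_l1 * reconstruction
```

The L1 term anchors the prediction to the SFDI-derived ground truth, while the adversarial term encourages outputs that are statistically consistent with real optical-property maps.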
