Contrastive Learning for Lifted Networks

05/07/2019
by Christopher Zach, et al.

In this work we address supervised learning via lifted network formulations. Lifted networks are interesting because they allow training on massively parallel hardware and assign energy models to discriminatively trained neural networks. We demonstrate that training methods for lifted networks proposed in the literature have significant limitations, and therefore we propose to use a contrastive loss to train lifted networks. We show that this contrastive training approximates back-propagation in theory and in practice, and that it is superior to the regular training objective for lifted networks.
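The abstract does not spell out the training procedure, so the sketch below only illustrates the general idea of contrastive training for a lifted (energy-based) network formulation, not the paper's exact method. It uses a linear two-layer network with a quadratic lifted energy and soft output clamping via a penalty `beta`, in the spirit of contrastive Hebbian learning; the paper's energy, clamping scheme, and loss may differ, and all names (`energy`, `relax`, `grad_W`, `contrastive_step`) are illustrative assumptions.

```python
# Minimal NumPy sketch of contrastive training for a lifted network.
# Assumed energy: E(z, y; W) = 0.5*||z - W1 x||^2 + 0.5*||y - W2 z||^2,
# with per-layer activations (z, y) treated as free optimization variables.
import numpy as np

rng = np.random.default_rng(0)

def energy(x, z, y, W1, W2):
    # Lifted energy of a state (z, y) for input x.
    return 0.5 * np.sum((z - W1 @ x) ** 2) + 0.5 * np.sum((y - W2 @ z) ** 2)

def relax(x, W1, W2, beta=0.0, target=None, steps=200, lr=0.1):
    # Minimize the energy over the internal variables (z, y) by gradient
    # descent, starting from the feed-forward pass. A beta > 0 softly clamps
    # the output y toward the target (an assumption; hard clamping also works).
    z = W1 @ x
    y = W2 @ z
    for _ in range(steps):
        r1 = z - W1 @ x
        r2 = y - W2 @ z
        gz = r1 - W2.T @ r2
        gy = r2 + (beta * (y - target) if target is not None else 0.0)
        z -= lr * gz
        y -= lr * gy
    return z, y

def grad_W(x, z, y, W1, W2):
    # Gradients of the energy with respect to the weights at a fixed state.
    return -np.outer(z - W1 @ x, x), -np.outer(y - W2 @ z, z)

def contrastive_step(x, target, W1, W2, beta=1.0, eta=0.1):
    # Free phase: output unconstrained; clamped phase: output pulled to label.
    z_f, y_f = relax(x, W1, W2)
    z_c, y_c = relax(x, W1, W2, beta=beta, target=target)
    loss = energy(x, z_c, y_c, W1, W2) - energy(x, z_f, y_f, W1, W2)
    # Descend the contrastive loss E(clamped) - E(free) in the weights.
    g1_f, g2_f = grad_W(x, z_f, y_f, W1, W2)
    g1_c, g2_c = grad_W(x, z_c, y_c, W1, W2)
    W1 -= eta * (g1_c - g1_f)
    W2 -= eta * (g2_c - g2_f)
    return loss

# Toy usage: fit a single input/target pair.
x = rng.normal(size=3)
target = np.array([1.0, -1.0])
W1 = 0.1 * rng.normal(size=(4, 3))
W2 = 0.1 * rng.normal(size=(2, 4))
for _ in range(100):
    loss = contrastive_step(x, target, W1, W2)
print("contrastive loss:", loss)
```

Note that for this quadratic energy the free-phase minimizer coincides with the feed-forward pass, so the free-phase weight gradients vanish and the update is driven entirely by the clamped phase; this is the sense in which the contrastive objective can approximate back-propagation, though the paper's analysis covers the nonlinear (e.g., ReLU) case.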
