Bayesian representation learning with oracle constraints

06/16/2015
by Theofanis Karaletsos et al.

Representation learning systems typically rely on massive amounts of labeled data to be trained to high accuracy. Recently, high-dimensional parametric models such as neural networks have succeeded in building rich representations using compressive, reconstructive, or supervised criteria. However, the semantic structure inherent in observations is often lost in the process. Human perception excels at understanding semantics, but this understanding cannot always be expressed in terms of labels. Thus, oracles or human-in-the-loop systems, for example crowdsourcing, are often employed to generate similarity constraints via the implicit similarity function encoded in human perception. In this work we propose to combine generative unsupervised feature learning with a probabilistic treatment of oracle information such as triplets, in order to transfer implicit privileged oracle knowledge into explicit nonlinear Bayesian latent factor models of the observations. We use a fast variational algorithm to learn the joint model and demonstrate its applicability on a well-known image dataset. We show how implicit triplet information provides a rich signal for learning representations that outperform both previous metric learning approaches and generative models lacking this side-information across a variety of predictive tasks. In addition, we illustrate that the proposed approach compartmentalizes the latent space semantically, which allows the latent variables to be interpreted.
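
The abstract describes augmenting a generative model's variational objective with a probabilistic likelihood over oracle triplets ("a is more similar to p than to n"). The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes a standard Gaussian-latent VAE and a softmax-over-latent-distances triplet likelihood, and the class name `TripletVAE`, the layer sizes, and the toy data are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TripletVAE(nn.Module):
    """Toy VAE whose ELBO will be augmented with a triplet log-likelihood
    term (illustrative sketch; dimensions and architecture are assumptions)."""
    def __init__(self, x_dim=784, z_dim=20, h_dim=200):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def elbo(self, x):
        # Amortized Gaussian posterior q(z|x) with the reparameterization trick.
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Bernoulli reconstruction term and analytic KL to a standard normal.
        logits = self.dec(z)
        rec = -F.binary_cross_entropy_with_logits(
            logits, x, reduction="none").sum(-1)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return rec - kl, z

def triplet_log_lik(z_a, z_p, z_n):
    """log p(oracle prefers (a, p) over (a, n) | latents): a softmax over
    negative squared Euclidean distances in latent space (an assumption)."""
    d_ap = (z_a - z_p).pow(2).sum(-1)
    d_an = (z_a - z_n).pow(2).sum(-1)
    # p = exp(-d_ap) / (exp(-d_ap) + exp(-d_an)), computed in log space.
    return -d_ap - torch.logaddexp(-d_ap, -d_an)

# One joint gradient step on stand-in data: maximize ELBO plus triplet term.
model = TripletVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_a, x_p, x_n = (torch.rand(32, 784) for _ in range(3))  # toy triplet batch
elbo_a, z_a = model.elbo(x_a)
elbo_p, z_p = model.elbo(x_p)
elbo_n, z_n = model.elbo(x_n)
loss = -(elbo_a + elbo_p + elbo_n + triplet_log_lik(z_a, z_p, z_n)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

Because the triplet term is a log-likelihood rather than a margin penalty, it slots directly into the joint variational objective, which is the sense in which the implicit oracle knowledge becomes part of the explicit latent factor model.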
