Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction

11/29/2016
by Richard Zhang, et al.

We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task -- predicting one subset of the data channels from another. Together, the sub-networks extract features from the entire input signal. By forcing the network to solve cross-channel prediction tasks, we induce a representation within the network which transfers well to other, unseen tasks. This method achieves state-of-the-art performance on several large-scale transfer learning benchmarks.
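The cross-channel prediction idea can be sketched in a few lines: split the input channels into two disjoint subsets, give each subset its own small encoder, train each encoder's features to predict the *other* subset, and concatenate both feature sets as the final representation. The sketch below is a minimal NumPy illustration under assumed toy shapes (a 3-channel input split 1-vs-2, single-linear-layer "sub-networks"); it is not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch: 2 samples, 3 channels, 64 pixels each (shapes are assumptions).
x = rng.standard_normal((2, 3, 64))
x1, x2 = x[:, :1], x[:, 1:]            # the "split": channel subset 1 vs. subset 2

def subnet(inp, w_feat):
    """One disjoint sub-network: encode its channel subset into features."""
    flat = inp.reshape(inp.shape[0], -1)   # (batch, channels * pixels)
    return np.tanh(flat @ w_feat)          # (batch, feature_dim)

w1 = rng.standard_normal((64, 16)) * 0.1    # encoder for x1 (1 ch * 64 px)
w2 = rng.standard_normal((128, 16)) * 0.1   # encoder for x2 (2 ch * 64 px)
v1 = rng.standard_normal((16, 128)) * 0.1   # head predicting x2 from x1's features
v2 = rng.standard_normal((16, 64)) * 0.1    # head predicting x1 from x2's features

f1 = subnet(x1, w1)   # features trained via the cross-channel task x1 -> x2
f2 = subnet(x2, w2)   # features trained via the cross-channel task x2 -> x1

# Cross-channel prediction losses (what training would minimize).
loss_12 = np.mean((f1 @ v1 - x2.reshape(2, -1)) ** 2)
loss_21 = np.mean((f2 @ v2 - x1.reshape(2, -1)) ** 2)

# The transferable representation concatenates both sub-networks' features,
# so together they cover the entire input signal.
features = np.concatenate([f1, f2], axis=1)
print(features.shape)   # (2, 32)
```

In the paper's actual experiments the sub-networks are deep convolutional networks and the channel split is chosen per domain (e.g. color vs. luminance channels of images); the toy linear encoders here only illustrate the structure of the objective.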
