Entanglement Entropy of Target Functions for Image Classification and Convolutional Neural Network

10/16/2017
by Ya-Hui Zhang, et al.

The success of deep convolutional neural networks (CNNs) in computer vision, especially on image classification problems, calls for a new information theory of functions of images, rather than of the images themselves. In this article, after establishing a deep mathematical connection between the image classification problem and the quantum spin model, we propose entanglement entropy, a generalization of the classical Boltzmann-Shannon entropy, as a powerful tool for characterizing the information needed to represent a general function of an image. We prove a sub-volume-law bound on the entanglement entropy of the target functions of reasonable image classification problems. Target functions of image classification therefore occupy only a small subspace of the whole Hilbert space, and as a result a neural network with a polynomial number of parameters suffices to represent them. The concept of entanglement entropy is also useful for characterizing the expressive power of different neural networks. For example, we show that to maintain the same expressive power, the number of channels D in a convolutional neural network should scale with the number of convolution layers n_c as D ∼ D_0^{1/n_c}. Therefore, a deeper CNN with large n_c is more efficient than a shallow one.
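The central quantity of the abstract can be illustrated numerically. The sketch below (an illustration, not code from the paper) computes the von Neumann entanglement entropy of a bipartite pure state via its Schmidt (singular-value) decomposition, and checks the abstract's channel-scaling claim that D ∼ D_0^{1/n_c} keeps D^{n_c} fixed; the function name and the choice D_0 = 4096 are assumptions for demonstration only.

```python
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """Von Neumann entanglement entropy of a bipartite pure state.

    psi is a state vector of length dim_a * dim_b; the entropy is
    computed from the Schmidt coefficients across the A|B cut.
    """
    psi = np.asarray(psi, dtype=float)
    psi = psi / np.linalg.norm(psi)           # normalize the state
    m = psi.reshape(dim_a, dim_b)             # matricize across the cut
    s = np.linalg.svd(m, compute_uv=False)    # Schmidt coefficients
    p = s ** 2                                # Schmidt probabilities
    p = p[p > 1e-12]                          # drop numerical zeros
    return float(-np.sum(p * np.log(p)))

# A product state |00> has zero entanglement across the cut, while a
# Bell state (|00> + |11>)/sqrt(2) has the maximal value log 2.
print(entanglement_entropy([1, 0, 0, 0], 2, 2))  # ~0.0
print(entanglement_entropy([1, 0, 0, 1], 2, 2))  # ~log 2 ≈ 0.6931

# Channel-scaling claim: with D ∼ D_0^{1/n_c}, the product D^{n_c}
# stays constant, so deeper networks need far fewer channels per layer.
D0 = 4096  # hypothetical channel count of a single-layer network
for n_c in (1, 2, 4):
    D = D0 ** (1 / n_c)
    print(n_c, round(D), round(D ** n_c))
```

Running this prints an entropy near zero for the product state and near log 2 for the Bell state, and shows the per-layer channel count shrinking from 4096 to 64 to 8 as depth grows while D^{n_c} remains 4096.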
