Theoretical Analysis of Inductive Biases in Deep Convolutional Networks

05/15/2023
by Zihao Wang, et al.

In this paper, we study the inductive biases of convolutional neural networks (CNNs), which are believed to be vital drivers behind their exceptional performance on vision tasks. We first analyze the universality of CNNs, i.e., their ability to approximate continuous functions. We prove that a depth of 𝒪(log d) suffices for universality, where d is the input dimension; this is a significant improvement over existing results, which required a depth of Ω(d). We also prove that learning sparse functions with CNNs requires only 𝒪̃(log^2 d) samples, indicating that deep CNNs can efficiently capture long-range sparse correlations. Both results are made possible by a novel combination of multichanneling and downsampling as the network depth increases. Lastly, we study the inductive biases of weight sharing and locality through the lens of symmetry. To disentangle the two biases, we introduce locally-connected networks (LCNs), which can be viewed as CNNs without weight sharing, and compare the performance of CNNs, LCNs, and fully-connected networks (FCNs) on a simple regression task. We prove that LCNs require Ω(d) samples while CNNs need only 𝒪̃(log^2 d) samples, highlighting the crucial role of weight sharing; we further prove that FCNs require Ω(d^2) samples while LCNs need only 𝒪̃(d) samples, demonstrating the importance of locality. These provable separations quantify the difference between the two biases, and the key observation behind our proofs is that weight sharing and locality break different symmetries in the learning process.
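The depth result hinges on downsampling: each stride-2 convolution halves the spatial length, so roughly log₂(d) layers suffice to collapse a d-dimensional input, while extra channels carry the intermediate information. As a rough illustration only (not the paper's construction), here is a minimal PyTorch sketch of such a depth-𝒪(log d) architecture; `log_depth_cnn` is a hypothetical helper, and the kernel size, channel width, and nonlinearity are arbitrary choices:

```python
import torch
import torch.nn as nn

def log_depth_cnn(d, channels=8):
    """Illustrative sketch (not the paper's construction): stride-2
    convolutions halve the spatial length, so ~log2(d) blocks reduce it to 1."""
    layers, length, in_ch = [], d, 1
    while length > 1:
        layers += [nn.Conv1d(in_ch, channels, kernel_size=2, stride=2),
                   nn.ReLU()]
        in_ch, length = channels, length // 2
    layers += [nn.Flatten(), nn.Linear(in_ch, 1)]
    return nn.Sequential(*layers)

net = log_depth_cnn(d=64)
print(sum(isinstance(m, nn.Conv1d) for m in net))  # 6 = log2(64) conv layers
x = torch.randn(4, 1, 64)  # batch of 4 inputs with input dimension d = 64
print(net(x).shape)        # torch.Size([4, 1])
```

The CNN/LCN/FCN comparison can likewise be made concrete: a locally-connected layer keeps the small receptive fields of a convolution but gives every output position its own unshared filter, while a fully-connected layer drops locality as well. PyTorch has no built-in 1-D locally-connected layer, so `LocallyConnected1d` below is a hypothetical sketch; the printed parameter counts (roughly constant vs. 𝒪(d) vs. 𝒪(d^2) per output channel) are only an intuition pump for the sample-complexity separations 𝒪̃(log^2 d) vs. Ω(d) vs. Ω(d^2) proved in the paper, not a substitute for them:

```python
class LocallyConnected1d(nn.Module):
    """Hypothetical sketch: same receptive fields as Conv1d, but each
    output position has its own unshared weights (no weight sharing)."""
    def __init__(self, in_channels, out_channels, in_length, kernel_size, stride=1):
        super().__init__()
        self.kernel_size, self.stride = kernel_size, stride
        out_length = (in_length - kernel_size) // stride + 1
        # One filter per output position, so parameters scale with in_length.
        self.weight = nn.Parameter(
            0.01 * torch.randn(out_length, out_channels, in_channels, kernel_size))

    def forward(self, x):  # x: (batch, in_channels, in_length)
        patches = x.unfold(2, self.kernel_size, self.stride)  # (B, C_in, L_out, K)
        # Contract channels and kernel taps with a separate filter per position l.
        return torch.einsum('bclk,lock->bol', patches, self.weight)

d, k, c = 64, 3, 8
models = {
    'CNN (weight sharing + locality)': nn.Conv1d(1, c, k),
    'LCN (locality only)':             LocallyConnected1d(1, c, d, k),
    'FCN (neither)':                   nn.Linear(d, c * (d - k + 1)),
}
for name, m in models.items():
    print(f'{name}: {sum(p.numel() for p in m.parameters())} parameters')
# CNN: 32, LCN: 1488, FCN: 32240 -- mirroring O(1) / O(d) / O(d^2) scaling.
```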
