Asymptotic properties of one-layer artificial neural networks with sparse connectivity
A law of large numbers for the empirical distribution of parameters of a one-layer artificial neural network with sparse connectivity is derived as the number of neurons and the number of stochastic gradient descent training iterations increase simultaneously.
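The object of study can be illustrated with a small numerical sketch: a one-hidden-layer network in mean-field (1/N) scaling, a fixed random sparse connectivity mask, plain SGD on a toy regression task, and the empirical measure (1/N) Σᵢ δ_{(cᵢ, wᵢ)} over neuron parameters. All concrete choices below (network width, sparsity level, activation, learning rate, target function) are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices (assumptions, not from the paper):
N, d = 200, 5        # number of neurons, input dimension
p_conn = 0.3         # probability that an input connection is present

mask = rng.random((N, d)) < p_conn  # fixed sparse connectivity pattern
W = rng.normal(size=(N, d)) * mask  # inner weights, zero outside the mask
c = rng.normal(size=N)              # outer weights

def forward(x):
    # Mean-field scaling: f(x) = (1/N) * sum_i c_i * tanh(w_i . x)
    return (c @ np.tanh(W @ x)) / N

# Toy regression data
X = rng.normal(size=(1000, d))
y = np.sin(X[:, 0])

lr = 0.5
for _ in range(2000):  # in the theory, iterations grow together with N
    k = rng.integers(len(X))
    x, t = X[k], y[k]
    h = np.tanh(W @ x)
    err = forward(x) - t
    # Gradients of 0.5 * err^2; the mask keeps absent connections at zero
    c -= lr * err * h / N
    W -= lr * err * (c * (1 - h**2))[:, None] * x[None, :] * mask / N

# Empirical distribution of parameters: each row is one neuron's (c_i, w_i);
# the law of large numbers concerns the limit of this measure as N -> infinity.
params = np.concatenate([c[:, None], W], axis=1)
print("empirical measure support shape:", params.shape)
```

Note the design choice that the connectivity mask multiplies both the initial weights and the gradient updates, so connections absent at initialization stay exactly zero throughout training.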