The effect of the choice of neural network depth and breadth on the size of its hypothesis space

06/06/2018
by Lech Szymanski, et al.

We show that the number of unique function mappings in a neural network hypothesis space is inversely proportional to ∏_l U_l!, where U_l is the number of neurons in hidden layer l.
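
As a quick illustration of the quantity in the abstract, the sketch below computes the factor ∏_l U_l! for a given list of hidden-layer widths. This is only an arithmetic aid for reading the result; the function name and the example widths are illustrative and not taken from the paper.

```python
from math import factorial

def symmetry_factor(hidden_widths):
    """Return prod_l U_l! for a list of hidden-layer widths U_l.

    By the abstract's result, the number of unique function mappings
    is inversely proportional to this quantity: permuting neurons
    within a hidden layer leaves the computed function unchanged,
    so the raw parameterisation over-counts by this factor.
    """
    product = 1
    for width in hidden_widths:
        product *= factorial(width)
    return product

# Hypothetical example: two hidden layers with 4 and 3 neurons
# give an over-counting factor of 4! * 3! = 144.
print(symmetry_factor([4, 3]))  # -> 144
```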
