Constrained Parameter Inference as a Principle for Learning

03/22/2022
by   Nasir Ahmad, et al.

Learning in biological and artificial neural networks is often framed as a problem in which targeted error signals guide parameter updates toward improved network behaviour. Backpropagation of error (BP) is an example of such an approach and has proven to be a highly successful application of stochastic gradient descent to deep neural networks. However, BP relies on the global transmission of gradient information and has therefore been criticised for its biological implausibility. We propose constrained parameter inference (COPI) as a new principle for learning. COPI allows for the estimation of network parameters under the constraints of decorrelated neural inputs and top-down perturbations of neural states. We show that COPI is not only more biologically plausible but also provides distinct advantages for fast learning, compared with the backpropagation algorithm.
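To give a flavour of why decorrelated inputs matter, the following toy sketch (not the paper's COPI algorithm; a hypothetical illustration) shows that when a neuron's inputs are whitened, the ordinary least-squares weight estimate collapses to a purely local, Hebbian-style correlation between input and target activity, with no global gradient transport required. All variable names and the linear setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_samples = 8, 4, 50_000

# Raw inputs with arbitrary correlations between channels
raw = rng.normal(size=(n_samples, n_in)) @ rng.normal(size=(n_in, n_in))

# Decorrelate (whiten) the inputs so that cov(x) = I
cov = np.cov(raw, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
whiten = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
x = (raw - raw.mean(axis=0)) @ whiten

# Target neural states, here produced by a known linear map
# (standing in for states nudged by top-down perturbations)
W_true = rng.normal(size=(n_out, n_in))
y = x @ W_true.T

# With decorrelated inputs, the local correlation estimator
# E[y x^T] recovers the least-squares weights directly,
# because the usual (E[x x^T])^{-1} factor is the identity
W_est = (y.T @ x) / n_samples

print(np.allclose(W_est, W_true, atol=1e-2))  # True: local statistic matches
```

With correlated inputs the same correlation statistic would be biased by the input covariance; decorrelation is what makes the purely local estimate sufficient, which is one intuition for the constraint named in the abstract.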
