Stochastic Descent Analysis of Representation Learning Algorithms

12/18/2014
by Richard M. Golden, et al.

Although stochastic approximation learning methods have been widely used in the machine learning literature for over 50 years, formal theoretical analyses of specific machine learning algorithms are less common because stochastic approximation theorems typically rest on assumptions that are difficult to communicate and verify. This paper presents a new stochastic approximation theorem for state-dependent noise with easily verifiable assumptions, applicable to the analysis and design of important deep learning algorithms including adaptive learning, contrastive divergence learning, stochastic descent expectation maximization, and active learning.
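The theorem's precise conditions are not given in this abstract, but the basic stochastic approximation iteration such results govern can be sketched as follows. This is an illustrative Robbins-Monro-style recursion on a toy quadratic objective, not the paper's own construction; the function names, the target value, and the i.i.d. noise model are all assumptions made for the example (the paper treats the harder state-dependent noise case):

```python
import random

def noisy_gradient(theta, target=3.0, noise_scale=0.5):
    # Gradient of 0.5 * (theta - target)**2 corrupted by zero-mean noise.
    # A stand-in for the stochastic gradients analyzed in the paper;
    # here the noise is simply i.i.d. Gaussian for illustration.
    return (theta - target) + random.gauss(0.0, noise_scale)

def stochastic_approximation(theta0=0.0, steps=20000, a=1.0):
    # Robbins-Monro iteration: theta_{t+1} = theta_t - eta_t * g_t,
    # with step sizes eta_t = a / (t + 1) satisfying the classical
    # conditions sum(eta_t) = infinity and sum(eta_t**2) < infinity.
    theta = theta0
    for t in range(steps):
        eta = a / (t + 1)
        theta -= eta * noisy_gradient(theta)
    return theta

random.seed(0)
print(stochastic_approximation())  # converges toward the target 3.0
```

The decaying step-size schedule is what lets the iterate average out the gradient noise while still reaching the optimum; stochastic approximation theorems of the kind the abstract describes make this convergence rigorous under stated noise and regularity assumptions.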
