Backward-Forward Algorithm: An Improvement towards Extreme Learning Machine

07/24/2019
by Dibyasundar Das, et al.

Extreme learning machine (ELM), a randomized learning paradigm for single hidden layer feed-forward networks, has gained significant attention for solving problems in diverse domains due to its fast learning. The output weights in ELM are determined by an analytic procedure, while the input weights and biases are randomly generated and fixed during the training phase. The learning performance of ELM is highly sensitive to several factors, such as the number of hidden nodes, the initialization of the input weights, and the choice of activation function in the hidden layer. Moreover, the presence of random input weights degrades performance, and the model suffers from an ill-posed problem. Hence, we propose a backward-forward algorithm for a single hidden layer feed-forward network that improves the generalization capability of the network with fewer hidden nodes. In the proposed method, both the input and output weights are determined analytically, which gives the network its performance advantage. The proposed model improves upon extreme learning machine with respect to the number of hidden nodes needed for generalization.
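For context, the analytic procedure the abstract refers to can be sketched in a few lines of NumPy. The following is a minimal sketch of the standard ELM baseline (random, fixed input weights; output weights from the Moore-Penrose pseudo-inverse of the hidden-layer output matrix), not the proposed backward-forward algorithm, whose update rules are not given in the abstract. The function names, the sigmoid activation, and the toy regression data are illustrative assumptions.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Fit a standard ELM by the analytic (pseudo-inverse) procedure."""
    rng = np.random.default_rng(seed)
    # Input weights and biases: randomly generated, fixed during training.
    W_in = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    # Hidden-layer output matrix H (sigmoid is one common activation choice).
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))
    # Output weights: minimum-norm least-squares solution of H @ beta = T,
    # obtained via the Moore-Penrose pseudo-inverse of H.
    beta = np.linalg.pinv(H) @ T
    return W_in, b, beta

def elm_predict(X, W_in, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))
    return H @ beta

# Toy usage: regress a 1-D function with 20 hidden nodes.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
T = np.sin(3 * X)
W_in, b, beta = elm_train(X, T, n_hidden=20)
mse = np.mean((elm_predict(X, W_in, b, beta) - T) ** 2)
print(f"training MSE: {mse:.2e}")
```

In this baseline, only `beta` is fitted to the data; the sensitivity to `W_in`, `b`, and `n_hidden` noted in the abstract is exactly what the proposed method aims to remove by also determining the input weights analytically.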
