Shallow Unorganized Neural Networks using Smart Neuron Model for Visual Perception
The recent success of Deep Neural Networks (DNNs) has revealed the significant capability of neuromorphic computing in many challenging applications. Although DNNs are derived from emulating biological neurons, doubts remain over whether DNNs are the final and best model for emulating the mechanisms of human intelligence. In particular, there are two discrepancies between computational DNN models and the observed facts of biological neurons. First, human neurons are interconnected randomly, while DNNs need carefully-designed architectures to work properly. Second, human neurons usually have a long spiking latency (~100 ms), which implies that not many layers can be involved in making a decision, while DNNs may have hundreds of layers to guarantee high accuracy. In this paper, we propose a new computational neuromorphic model, namely shallow unorganized neural networks (SUNNs), in contrast to DNNs. The proposed SUNNs differ from standard ANNs or DNNs in three fundamental aspects: 1) SUNNs are based on an adaptive neuron cell model, Smart Neurons, that allows each neuron to adaptively respond to its inputs rather than carrying out a fixed weighted-sum operation like the neuron model in ANNs/DNNs; 2) SUNNs cope with computational tasks using only shallow architectures; 3) SUNNs have a natural topology with random interconnections, as the human brain does, and as proposed by Turing's B-type unorganized machines. We implemented the proposed SUNN architecture and tested it on a number of unsupervised early-stage visual perception tasks. Surprisingly, such shallow architectures achieved very good results in our experiments. The success of our new computational model makes it a working example of Turing's B-type machine that achieves performance comparable to or better than that of state-of-the-art algorithms.
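To make the contrast in aspect 1) concrete, the sketch below compares a standard fixed weighted-sum neuron with a hypothetical input-adaptive neuron. The gating function and all names here are illustrative assumptions only; the abstract does not specify the actual Smart Neuron mechanism.

```python
import numpy as np

# Standard ANN/DNN neuron: a fixed weighted sum followed by a nonlinearity.
def fixed_neuron(x, w, b):
    return np.tanh(w @ x + b)

# Hypothetical adaptive neuron: effective weights are modulated by the
# current input before the sum. This gating scheme is an assumption for
# illustration, not the paper's Smart Neuron definition.
def adaptive_neuron(x, w, b):
    gate = 1.0 / (1.0 + np.exp(-np.abs(x)))  # per-input modulation derived from x
    return np.tanh((w * gate) @ x + b)

x = np.array([0.5, -1.0, 2.0])
w = np.array([0.3, 0.7, -0.2])
b = 0.1
print(fixed_neuron(x, w, b))     # same weights, fixed response
print(adaptive_neuron(x, w, b))  # response varies with input-dependent gating
```

The point of the contrast is that in the fixed neuron the mapping from inputs to activation is determined entirely by learned weights, whereas an adaptive neuron can change its effective computation per input pattern.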