Zeroing Neural Networks, an Introduction to, a Survey of, and Predictive Computations for Time-varying Matrix Problems

08/06/2020
by   Frank Uhlig, et al.

This paper is designed to increase knowledge and understanding of time-varying matrix problems and Zeroing Neural Networks in the western numerical analysis community. Zeroing Neural Networks (ZNN) were invented in China about 20 years ago, and almost all advances in the field have been made in, and still come from, their birthplace. ZNN methods have become a backbone for solving discretized, sensor-driven time-varying matrix problems in real time, both in theory and in on-chip applications for robots, in control theory, and in engineering in general. They have become the method of choice for many time-varying matrix problems that benefit from or require efficient, accurate, and predictive real-time computations. A typical ZNN algorithm needs seven distinct steps for its set-up. The construction of a ZNN algorithm starts with an error equation and the stipulation that the error function decrease exponentially fast. The resulting error differential equation (DE) is then mated with a convergent look-ahead finite difference formula to create a derivative-free computer code that reliably predicts the future state of the system from current and earlier state data. Matlab codes for ZNN typically consist of one linear equations solve and one short recursion of current and previous state data per time step. This makes ZNN-based algorithms highly competitive with ODE IVP path-following or homotopy methods, which are not designed to work with constant-sampling-gap incoming sensor data but instead work adaptively. To illustrate the easy adaptability of ZNN and to further its understanding, this paper details the seven set-up steps for 11 separate time-varying problems and supplies codes for six of them. Open problems are mentioned, as well as detailed references to recent work on each of the treated problems.
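As a concrete illustration of the construction sketched above, here is a minimal, hypothetical Python/NumPy sketch (not the paper's Matlab code) of a discretized ZNN iteration for time-varying matrix inversion. It assumes a synthetic sensor matrix A(t), a decay constant eta, and a constant sampling gap tau, and it uses the simplest convergent look-ahead formula (forward Euler) in place of the higher-order formulas studied in the paper; the names znn_inverse and A_of_t are invented for this example.

```python
import numpy as np

def A_of_t(t):
    # stand-in "sensor" reading; any smooth, nonsingular A(t) will do
    return np.array([[2.0 + np.sin(t), 0.3],
                     [0.3,             2.0 + np.cos(t)]])

def znn_inverse(T=10.0, tau=1e-3, eta=10.0):
    # Error function   E(t) = A(t) X(t) - I
    # Stipulation      dE/dt = -eta * E(t)            (exponential decay)
    # Model DE         A(t) dX/dt = -dA/dt X - eta (A X - I)
    n = A_of_t(0.0).shape[0]
    X = np.linalg.inv(A_of_t(0.0))     # start from the exact inverse at t = 0
    A_prev = A_of_t(0.0)
    steps = int(round(T / tau))
    for k in range(steps):
        t_k = k * tau
        A_k = A_of_t(t_k)
        Adot_k = (A_k - A_prev) / tau  # derivative-free estimate from samples
        E_k = A_k @ X - np.eye(n)      # current error
        # one linear equations solve per time step
        Xdot = np.linalg.solve(A_k, -Adot_k @ X - eta * E_k)
        X = X + tau * Xdot             # look-ahead recursion: predicts X(t_{k+1})
        A_prev = A_k
    residual = np.linalg.norm(A_of_t(steps * tau) @ X - np.eye(n))
    return X, residual

if __name__ == "__main__":
    X_pred, err = znn_inverse()
    print("predicted inverse residual ||A(T) X - I|| =", err)
```

Each pass through the loop performs exactly one linear solve and one short recursion on current and previous state data, mirroring the per-step cost described in the abstract.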
