Can we learn gradients by Hamiltonian Neural Networks?

10/31/2021
by   Aleksandr Timofeev, et al.

In this work, we propose a meta-learner based on ODE neural networks that learns gradients. This approach makes the optimizer more flexible by inducing an automatic inductive bias for the given task. Using the simplest Hamiltonian Neural Network, we demonstrate that our method outperforms an LSTM-based meta-learner on an artificial task and on the MNIST dataset with ReLU activations in the optimizee. Furthermore, it also surpasses classic optimization methods on the artificial task and achieves comparable results on MNIST.
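To make the core idea concrete: a Hamiltonian Neural Network parameterizes a scalar function H(q, p) and evolves the state along the symplectic gradient (dH/dp, -dH/dq), which conserves H by construction. The sketch below is not the authors' model; it uses a hand-written quadratic Hamiltonian (a harmonic oscillator) and finite differences in place of a learned network and autograd, purely to illustrate the dynamics an HNN would learn.

```python
import numpy as np

# Toy Hamiltonian H(q, p) = (q^2 + p^2) / 2 (harmonic oscillator).
# An HNN would replace this with a learned network H_theta(q, p);
# the analytic form here is an illustrative stand-in.
def H(q, p):
    return 0.5 * (q**2 + p**2)

def grad_H(q, p, eps=1e-5):
    # Central finite differences stand in for automatic differentiation.
    dHdq = (H(q + eps, p) - H(q - eps, p)) / (2 * eps)
    dHdp = (H(q, p + eps) - H(q, p - eps)) / (2 * eps)
    return dHdq, dHdp

def step(q, p, dt=0.01):
    # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq,
    # integrated with symplectic (semi-implicit) Euler.
    dHdq, _ = grad_H(q, p)
    p = p - dt * dHdq
    _, dHdp = grad_H(q, p)
    q = q + dt * dHdp
    return q, p

q, p = 1.0, 0.0
E0 = H(q, p)
for _ in range(1000):
    q, p = step(q, p)
# Energy stays approximately conserved along the trajectory.
print(abs(H(q, p) - E0) < 0.05)
```

Because the flow conserves the learned H, an HNN-based optimizer inherits a structural bias toward stable long-horizon dynamics, which is the flexibility the abstract refers to.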
