Non-convex Optimization via Adaptive Stochastic Search for End-to-End Learning and Control

06/22/2020
by Ioannis Exarchos, et al.

In this work we propose adaptive stochastic search as a building block for general, non-convex optimization operations within deep neural network architectures. Specifically, for an objective function located at some layer of the network and parameterized by network parameters, we employ adaptive stochastic search to optimize over its output. The operation is differentiable and does not obstruct the flow of gradients during backpropagation, which allows us to incorporate it as a component in end-to-end learning. We study the properties of the proposed optimization module, benchmark it against two existing alternatives on a synthetic energy-based structured prediction task, and showcase its use in stochastic optimal control applications.
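To make the idea concrete, below is a minimal sketch of how such a differentiable inner-optimization layer can be written in PyTorch. It is our illustration, not the authors' released code: Gaussian candidate solutions are reweighted by a softmax of their objective values, and the sampling distribution moves toward the weighted average. The names (`adaptive_stochastic_search`, `objective_fn`), the temperature `temp`, and all hyperparameter values are assumptions chosen for clarity. Because every step is composed of differentiable operations, gradients with respect to parameters inside the objective pass through the optimizer's output.

```python
# Minimal sketch (assumed names and hyperparameters, not the authors' code)
# of adaptive stochastic search as a differentiable optimization layer.
import torch

def adaptive_stochastic_search(objective_fn, mu0, sigma0,
                               n_samples=64, n_iters=10, temp=1.0):
    """Approximately maximize objective_fn over x, starting from N(mu0, sigma0^2).

    Each iteration reweights Gaussian samples by a softmax of their objective
    values and updates the mean and standard deviation from the weighted
    samples. All steps are differentiable, so gradients w.r.t. parameters
    inside objective_fn flow through the returned solution.
    """
    mu, sigma = mu0, sigma0
    for _ in range(n_iters):
        eps = torch.randn(n_samples, *mu.shape)          # reparameterized noise
        x = mu.unsqueeze(0) + sigma * eps                # candidate solutions
        scores = objective_fn(x)                         # shape: (n_samples,)
        w = torch.softmax(scores / temp, dim=0)          # exponentiated weights
        wv = w.view(-1, *([1] * mu.dim()))               # broadcastable weights
        mu = (wv * x).sum(0)                             # weighted mean update
        var = (wv * (x - mu) ** 2).sum(0)                # weighted variance
        sigma = var.sqrt() + 1e-3                        # keep exploration alive
    return mu

# Example: the forward pass optimizes a quadratic parameterized by theta,
# and gradients from a downstream loss reach theta through the optimizer.
theta = torch.tensor([2.0, -1.0], requires_grad=True)
obj = lambda x: -((x - theta) ** 2).sum(dim=-1)          # maximized at x = theta
x_star = adaptive_stochastic_search(obj, torch.zeros(2), torch.ones(2) * 0.5)
loss = ((x_star - torch.tensor([1.0, 1.0])) ** 2).sum()
loss.backward()                                          # gradients reach theta
print(x_star, theta.grad)
```

In the example, `x_star` approaches `theta` as the inner optimizer converges, so the downstream loss is effectively a function of `theta` and backpropagation fills in `theta.grad`, mirroring the end-to-end learning setup described in the abstract.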
