Derivative free optimization via repeated classification

04/11/2018
by Tatsunori B. Hashimoto, et al.

We develop an algorithm for minimizing a function using n batched function value measurements at each of T rounds, using classifiers to identify the function's sublevel set. We show that sufficiently accurate classifiers achieve linear convergence rates, and that the convergence rate is tied to the difficulty of actively learning sublevel sets. Further, we show that the bootstrap is a computationally efficient approximation to the required classification scheme. The result is a computationally efficient derivative-free algorithm, requiring no tuning, that consistently outperforms other approaches on simulations, standard benchmarks, real-world DNA binding optimization, and airfoil design problems whenever batched function queries are natural.
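The core loop described above can be sketched as follows. This is a minimal, hedged illustration, not the paper's actual method: the objective `f`, the k-nearest-neighbor classifier, the median-based sublevel threshold, and all parameters (`n`, `T`, the domain, `k`) are assumptions chosen for a runnable toy example; the paper's algorithm uses its own classification scheme with bootstrap resampling.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Toy black-box objective (assumed stand-in; minimum at (0.3, 0.3))."""
    return np.sum((x - 0.3) ** 2, axis=-1)

def knn_score(train_X, train_labels, cand, k=5):
    """Classifier score: fraction of the k nearest labeled points that
    fall inside the estimated sublevel set."""
    d2 = ((cand[:, None, :] - train_X[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]
    return train_labels[nn].mean(axis=1)

d, n, T = 2, 50, 15                      # dimension, batch size, rounds
X = rng.uniform(-1, 1, size=(n, d))      # initial batch of query points
best = np.inf
for t in range(T):
    y = f(X)                             # n batched function evaluations
    best = min(best, y.min())
    # label points below the batch median as "inside the sublevel set"
    labels = (y <= np.median(y)).astype(float)
    # propose candidates, keep the n the classifier rates most promising
    cand = rng.uniform(-1, 1, size=(20 * n, d))
    p = knn_score(X, labels, cand)
    X = cand[np.argsort(-p)[:n]]

print(best)  # shrinks toward 0 as batches concentrate near the minimizer
```

Each round the classifier is refit on the latest batch, so the estimated sublevel set (and hence the sampling region) contracts toward the minimizer, which is the mechanism behind the linear convergence rates claimed in the abstract.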
