Meta-strategy for Learning Tuning Parameters with Guarantees

02/04/2021
by Dimitri Meunier, et al.

Online gradient methods, like the online gradient algorithm (OGA), often depend on tuning parameters that are difficult to set in practice. We consider an online meta-learning scenario and propose a meta-strategy to learn these parameters from past tasks. Our strategy is based on the minimization of a regret bound, and it allows the initialization and the step size in OGA to be learned with guarantees. We provide a regret analysis of the strategy in the case of convex losses. It suggests that, when there are parameters θ_1,…,θ_T that respectively solve tasks 1,…,T well and that are close enough to each other, our strategy indeed improves on learning each task in isolation.
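To make the setting concrete, below is a minimal Python sketch of OGA run over a sequence of tasks, where the initialization and step size are carried from one task to the next. It is not the paper's meta-strategy: the meta-update shown (pulling the initialization toward each task's average iterate and decaying the step size as 1/√t) is an illustrative stand-in for the regret-bound minimization described in the abstract, and all function names and parameters are assumptions.

```python
# Illustrative sketch only; the meta-update rule below is an assumption,
# not the regret-bound minimization proposed in the paper.
import numpy as np

def oga(grads, theta0, eta):
    """Online gradient algorithm on a sequence of (sub)gradient oracles;
    returns the iterates visited during the task."""
    theta = theta0.copy()
    iterates = [theta.copy()]
    for grad in grads:
        theta = theta - eta * grad(theta)
        iterates.append(theta.copy())
    return iterates

def meta_train(tasks, dim, eta0=0.1, meta_lr=0.5):
    """Toy meta-loop: after each task, move the shared initialization toward
    that task's average iterate and shrink the step size (a stand-in for the
    paper's regret-bound minimization, which is not reproduced here)."""
    init = np.zeros(dim)
    eta = eta0
    for t, grads in enumerate(tasks, start=1):
        iterates = oga(grads, init, eta)
        task_avg = np.mean(iterates, axis=0)               # average iterate of task t
        init = (1 - meta_lr) * init + meta_lr * task_avg   # pull init toward it
        eta = eta0 / np.sqrt(t)                            # standard 1/sqrt(t) decay
    return init, eta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Each task: quadratic losses centered near a shared parameter theta*,
    # so the task optima are close to each other -- the regime in which the
    # abstract says meta-learning beats learning each task in isolation.
    theta_star = rng.normal(size=3)
    def make_task():
        centers = theta_star + 0.1 * rng.normal(size=(20, 3))
        return [lambda th, c=c: 2 * (th - c) for c in centers]
    tasks = [make_task() for _ in range(10)]
    init, eta = meta_train(tasks, dim=3)
    print("learned init:", init, "final step size:", eta)
```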
