Second Order Path Variationals in Non-Stationary Online Learning

05/04/2022
by Dheeraj Baby, et al.

We consider the problem of universal dynamic regret minimization under exp-concave and smooth losses. We show that appropriately designed Strongly Adaptive algorithms achieve a dynamic regret of Õ(d^2 n^{1/5} C_n^{2/5} ∨ d^2), where n is the time horizon and C_n is a path variational based on second-order differences of the comparator sequence. Such a path variational naturally encodes comparator sequences that are piecewise linear, a powerful family that tracks a variety of non-stationarity patterns in practice (Kim et al., 2009). The aforementioned dynamic regret rate is shown to be optimal modulo dimension dependencies and poly-logarithmic factors of n. Our proof techniques rely on analysing the KKT conditions of the offline oracle and require several non-trivial generalizations of the ideas in Baby and Wang, 2021, where the latter work only leads to a slower dynamic regret rate of Õ(d^{2.5} n^{1/3} C_n^{2/3} ∨ d^{2.5}) for the current problem.
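To make the role of C_n concrete, here is a minimal sketch of a second-order path variational, computed as the summed absolute second differences of a scalar comparator sequence. The choice of the absolute-value (L1) aggregation is an illustrative assumption, not necessarily the exact norm used in the paper; the point is that any linear segment contributes zero, so C_n charges only the "kinks" of a piecewise-linear comparator.

```python
import numpy as np

def second_order_path_variation(u):
    """Illustrative C_n-style variational: sum_t |u_{t-1} - 2 u_t + u_{t+1}|.

    `u` is an (n,)-shaped comparator sequence. The L1 aggregation here is
    an assumption made for illustration.
    """
    d2 = u[:-2] - 2 * u[1:-1] + u[2:]  # discrete second differences
    return np.abs(d2).sum()

t = np.arange(10, dtype=float)

# A single linear segment: all second differences vanish.
print(second_order_path_variation(2 * t))  # 0.0

# Piecewise linear with one kink at t = 4 (slope changes from 1 to 2):
# only the kink contributes to the variational.
kinked = np.where(t < 4, t, 4 + 2 * (t - 4))
print(second_order_path_variation(kinked))  # 1.0
```

Because C_n stays small for sequences with few kinks even when they drift far overall, it captures gradual non-stationarity more faithfully than a first-order (total-variation) path length.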
