Non-stationary Projection-free Online Learning with Dynamic and Adaptive Regret Guarantees

05/19/2023 · by Yibo Wang, et al.
Projection-free online learning has drawn increasing interest due to its efficiency in solving high-dimensional problems with complicated constraints. However, most existing projection-free online methods focus on minimizing the static regret, which unfortunately fails to capture the challenge of changing environments. In this paper, we investigate non-stationary projection-free online learning, and choose dynamic regret and adaptive regret to measure the performance. Specifically, we first provide a novel dynamic regret analysis for an existing projection-free method named BOGD_IP, and establish an 𝒪(T^3/4(1+P_T)) dynamic regret bound, where P_T denotes the path-length of the comparator sequence. Then, we improve the upper bound to 𝒪(T^3/4(1+P_T)^1/4) by running multiple BOGD_IP algorithms with different step sizes in parallel, and tracking the best one on the fly. Our results are the first general-case dynamic regret bounds for projection-free online learning, and can recover the existing 𝒪(T^3/4) static regret by setting P_T = 0. Furthermore, we propose a projection-free method to attain an 𝒪̃(τ^3/4) adaptive regret bound for any interval with length τ, which nearly matches the static regret over that interval. The essential idea is to maintain a set of BOGD_IP algorithms dynamically, and combine them by a meta algorithm. Moreover, we demonstrate that it is also equipped with an 𝒪(T^3/4(1+P_T)^1/4) dynamic regret bound. Finally, empirical studies verify our theoretical findings.
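The expert-tracking idea described above (running several base learners with different step sizes and combining them with a meta algorithm) can be sketched as follows. This is an illustrative toy, not the paper's BOGD_IP: each "expert" here is plain online gradient descent on a 1-D interval, the step-size grid and the drifting target are invented for the example, and the meta algorithm is a standard Hedge-style multiplicative-weights update.

```python
import numpy as np

T = 1000
n_experts = 4
etas = [T ** -0.25 * 2 ** k for k in range(n_experts)]  # assumed step-size grid
x = np.zeros(n_experts)                                  # each expert's iterate
w = np.ones(n_experts)                                   # meta-algorithm weights
beta = np.sqrt(np.log(n_experts) / T)                    # meta learning rate

total_loss = 0.0
for t in range(T):
    target = np.sin(2 * np.pi * t / T)      # slowly drifting comparator
    p = w / w.sum()
    x_play = p @ x                          # play the weighted combination
    total_loss += (x_play - target) ** 2
    # each expert updates with its own step size on its own loss
    losses = (x - target) ** 2
    grads = 2 * (x - target)
    x = np.clip(x - np.array(etas) * grads, -1.0, 1.0)
    # meta algorithm reweights experts by exponentiated losses (Hedge)
    w *= np.exp(-beta * losses)

avg_loss = total_loss / T
```

Because the weights concentrate on whichever step size tracks the drifting target best, the combined decision inherits (up to the meta-regret) the performance of the best expert, which is the mechanism behind the improved (1+P_T)^1/4 dependence.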
