Smoothed Online Optimization for Regression and Control

10/23/2018
by Gautam Goel, et al.

We consider Online Convex Optimization (OCO) in the setting where the costs are m-strongly convex and the online learner pays a switching cost for changing decisions between rounds. We show that the recently proposed Online Balanced Descent (OBD) algorithm is constant competitive in this setting, with competitive ratio 3 + O(1/m), irrespective of the ambient dimension. Additionally, we show that when the sequence of cost functions is ϵ-smooth, OBD has near-optimal dynamic regret and maintains strong per-round accuracy. We demonstrate the generality of our approach by showing that the OBD framework can be used to construct competitive algorithms for a variety of online problems across learning and control, including online variants of ridge regression, logistic regression, maximum likelihood estimation, and LQR control.
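To make the OBD idea concrete, below is a minimal sketch of one OBD step, specialized to quadratic hitting costs f_t(x) = (m/2)·‖x − v_t‖² with an ℓ2 switching cost. For quadratics, the projection onto a sublevel set {x : f_t(x) ≤ l} has a closed form, and the sketch selects the level l by bisection so that the movement cost balances (a multiple of) the hitting cost paid at the boundary. The function name `obd_step`, the balance parameter `beta`, and the bisection tolerance are illustrative choices under these assumptions, not the paper's exact formulation.

```python
import numpy as np

def obd_step(x_prev, v, m, beta=2.0, tol=1e-8):
    """One Online Balanced Descent step, sketched for the quadratic hitting
    cost f_t(x) = (m/2) * ||x - v||^2 with an l2 switching cost.

    Projecting x_prev onto the sublevel set {x : f_t(x) <= l} pulls it
    toward the minimizer v until it reaches the ball of radius
    r(l) = sqrt(2*l/m) centered at v. The level l is chosen by bisection
    so that the movement cost ||x_t - x_prev|| equals beta times the
    hitting cost l paid on the boundary -- the "balance" in OBD.
    """
    d = np.linalg.norm(x_prev - v)
    if d < tol:                       # already at this round's minimizer
        return v.copy()
    # movement(l) = d - sqrt(2*l/m) decreases in l while beta*l increases,
    # so the balance equation has a unique root in [0, (m/2)*d**2].
    lo, hi = 0.0, 0.5 * m * d**2
    for _ in range(100):
        l = 0.5 * (lo + hi)
        movement = d - np.sqrt(2.0 * l / m)
        if movement > beta * l:
            lo = l                    # level set too tight; raise the level
        else:
            hi = l
    l = 0.5 * (lo + hi)
    r = np.sqrt(2.0 * l / m)
    return v + (x_prev - v) * (r / d) # projection onto {x : f_t(x) <= l}

# Toy run: track a drifting minimizer with strongly convex quadratic costs.
rng = np.random.default_rng(0)
x = np.zeros(2)
for t in range(5):
    v_t = rng.normal(size=2)          # minimizer of this round's cost
    x = obd_step(x, v_t, m=2.0)
```

For general convex costs, the same balance rule applies, but the projection onto a level set is computed with a convex solver rather than in closed form; the quadratic case above just keeps the example self-contained.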
