Frank Wolfe Meets Metric Entropy

05/17/2022
by Suhas Vijaykumar, et al.

The Frank-Wolfe algorithm has seen a resurgence in popularity due to its ability to efficiently solve constrained optimization problems in machine learning and high-dimensional statistics. As such, there is much interest in establishing when the algorithm may possess a "linear" O(log(1/ϵ)) dimension-free iteration complexity comparable to that of projected gradient descent. In this paper, we provide a general technique for establishing domain-specific and easy-to-estimate lower bounds for Frank-Wolfe and its variants using the metric entropy of the domain. Most notably, we show that a dimension-free linear upper bound must fail not only in the worst case, but in the average case: for a Gaussian or spherical random polytope in ℝ^d with poly(d) vertices, Frank-Wolfe requires up to Ω̃(d) iterations to achieve an O(1/d) error bound, with high probability. We also establish this phenomenon for the nuclear norm ball. The link with metric entropy also has interesting positive implications for conditional gradient algorithms in statistics, such as gradient boosting and matching pursuit. In particular, we show that it is possible to extract fast-decaying upper bounds on the excess risk directly from an analysis of the underlying optimization procedure.
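For context on the setting the abstract describes, the following is a minimal sketch (not taken from the paper) of the standard Frank-Wolfe iteration over a polytope specified by its vertex set, where the linear minimization oracle is simply a scan over the vertices. The toy quadratic objective, function names, and random-polytope instance are illustrative assumptions, chosen to mirror the Gaussian random polytope discussed above.

```python
import numpy as np

def frank_wolfe(grad, vertices, x0, n_iters=100):
    """Standard Frank-Wolfe iteration over the convex hull of `vertices`.

    grad     : callable returning the gradient of the objective at x
    vertices : (m, d) array; the linear minimization oracle scans these rows
    x0       : starting point inside the convex hull
    """
    x = x0.copy()
    for t in range(n_iters):
        g = grad(x)
        # Linear minimization oracle: vertex minimizing <g, v> over the polytope
        s = vertices[np.argmin(vertices @ g)]
        gamma = 2.0 / (t + 2)  # classical step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x

# Toy instance echoing the abstract's setting: minimize 1/2 * ||x - b||^2
# over the convex hull of a Gaussian random polytope in R^d.
rng = np.random.default_rng(0)
d, m = 50, 500
V = rng.standard_normal((m, d))   # poly(d) random vertices
b = rng.standard_normal(d)
x_hat = frank_wolfe(lambda x: x - b, V, x0=V.mean(axis=0), n_iters=200)
print(float(np.linalg.norm(x_hat - b)))
```

In this sketch the projection step of projected gradient descent is replaced by a single linear minimization over the vertex set, which is the structural feature whose iteration complexity the paper's metric-entropy lower bounds address.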
