Accelerated Methods for Riemannian Min-Max Optimization Ensuring Bounded Geometric Penalties
In this work, we study optimization problems of the form min_x max_y f(x, y), where f(x, y) is defined on a product Riemannian manifold ℳ×𝒩 and is μ_x-strongly geodesically convex (g-convex) in x and μ_y-strongly g-concave in y, for μ_x, μ_y ≥ 0. We design accelerated methods when f is (L_x, L_y, L_xy)-smooth and ℳ, 𝒩 are Hadamard. To that end, we introduce new g-convex optimization results of independent interest: we show global linear convergence for metric-projected Riemannian gradient descent and improve existing accelerated methods by reducing geometric constants. Additionally, we complete the analysis of two previous works on the Riemannian min-max case by removing the assumption that the iterates remain in a pre-specified compact set.
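To make the setting concrete, the sketch below shows a plain, non-accelerated baseline: metric-projected Riemannian gradient descent-ascent on a product manifold. This is an illustration under stated assumptions, not the paper's accelerated method: the manifold interface (exp, log, proj) is a hypothetical one, Euclidean space stands in as the trivial Hadamard manifold so the code runs end to end, and the unit ball serves as a placeholder g-convex constraint set for the metric projection.

```python
import numpy as np

# Minimal manifold interface (hypothetical). Euclidean space is the trivial
# Hadamard manifold, with exp_x(v) = x + v and log_x(y) = y - x; a real use
# would plug in e.g. hyperbolic space or the SPD manifold.
class Euclidean:
    def exp(self, x, v):   # geodesic step from x in tangent direction v
        return x + v
    def log(self, x, y):   # tangent vector at x pointing to y (needed by accelerated variants)
        return y - x
    def proj(self, x):     # metric projection onto a g-convex set (here: the unit ball)
        n = np.linalg.norm(x)
        return x if n <= 1.0 else x / n

def rgda(M, N, grad_x, grad_y, x, y, step_x, step_y, iters=500):
    """Metric-projected Riemannian gradient descent-ascent, alternating
    updates. A plain baseline only; NOT the paper's accelerated method."""
    for _ in range(iters):
        x = M.proj(M.exp(x, -step_x * grad_x(x, y)))  # descent step in x
        y = N.proj(N.exp(y,  step_y * grad_y(x, y)))  # ascent step in y (uses updated x)
    return x, y

# Toy strongly convex-concave objective f(x, y) = mu/2 |x|^2 + <x, y> - mu/2 |y|^2,
# whose unique saddle point is the origin.
mu = 1.0
grad_x = lambda x, y: mu * x + y   # Riemannian gradient in x
grad_y = lambda x, y: x - mu * y   # Riemannian gradient in y
x0, y0 = np.ones(3), -np.ones(3)
x_star, y_star = rgda(Euclidean(), Euclidean(), grad_x, grad_y, x0, y0, 0.2, 0.2)
print(x_star, y_star)  # both iterates should approach the saddle point at 0
```

On a genuinely curved Hadamard manifold the same template applies, but the convergence rate picks up geometric constants depending on the curvature and the domain diameter; reducing (and bounding) those penalties is precisely what the accelerated methods in the paper address.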