Anderson Acceleration as a Krylov Method with Application to Asymptotic Convergence Analysis
Anderson acceleration is widely used to accelerate the convergence of fixed-point methods x_{k+1} = q(x_k), x_k ∈ ℝ^n. We consider the case of linear fixed-point methods x_{k+1} = M x_k + b and obtain polynomial residual update formulas for AA(m), i.e., Anderson acceleration with window size m. We find that the standard AA(m) method, with initial iterates x_k, k = 0, …, m, defined recursively using AA(k), is a Krylov space method. This immediately implies that k iterations of AA(m) cannot produce a smaller residual than k iterations of GMRES without restart (but it implies nothing about the relative convergence speed of windowed AA(m) versus restarted GMRES(m)). We introduce the notion of a multi-Krylov method and show that AA(m) with general initial iterates {x_0, …, x_m} is a multi-Krylov method. We find that the AA(m) residual polynomials exhibit a periodic memory effect in which increasing powers of the error iteration matrix M act on the initial residual as the iteration number increases. We derive several further results based on these polynomial residual update formulas, including orthogonality relations, a lower bound on the AA(1) acceleration coefficient β_k, and explicit nonlinear recursions for the AA(1) residuals and residual polynomials that do not include the acceleration coefficient β_k. We apply these results to study the influence of the initial guess on the asymptotic convergence factor of AA(1).
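Since the abstract centers on AA(m) applied to linear fixed-point iterations, the following minimal NumPy sketch may help fix notation. It implements the common least-squares (difference) formulation of windowed Anderson acceleration applied to x_{k+1} = M x_k + b; the function name anderson_accelerate and the choice of this particular AA variant are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def anderson_accelerate(M, b, x0, m=1, n_iter=50):
    """Illustrative sketch of windowed Anderson acceleration AA(m)
    for the linear fixed-point iteration x_{k+1} = M x_k + b.

    Uses one standard least-squares (difference) formulation:
    minimize ||f_k - F_k @ gamma|| over gamma, where
    f_k = g(x_k) - x_k is the residual and F_k stacks
    residual differences from the window. This variant is an
    assumption; it is not necessarily the paper's exact presentation.
    """
    g = lambda x: M @ x + b                  # fixed-point map
    xs = [np.asarray(x0, dtype=float)]       # iterates x_0, x_1, ...
    fs = [g(xs[0]) - xs[0]]                  # residuals f_k = g(x_k) - x_k
    for k in range(n_iter):
        mk = min(m, k)                       # effective window size
        gx = g(xs[-1])
        if mk == 0:
            x_new = gx                       # plain fixed-point step for x_1
        else:
            # Columns are residual differences Δf_i = f_{i+1} - f_i
            F = np.column_stack([fs[-mk + i] - fs[-mk + i - 1]
                                 for i in range(mk)])
            # Columns are map-value differences Δg_i = g(x_{i+1}) - g(x_i)
            G = np.column_stack([g(xs[-mk + i]) - g(xs[-mk + i - 1])
                                 for i in range(mk)])
            gamma, *_ = np.linalg.lstsq(F, fs[-1], rcond=None)
            x_new = gx - G @ gamma           # accelerated update
        xs.append(x_new)
        fs.append(g(x_new) - x_new)
    return xs

if __name__ == "__main__":
    # Demo on a contractive linear map in R^3 (spectral norm 0.5)
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    M = 0.5 * A / np.linalg.norm(A, 2)
    b = rng.standard_normal(3)
    xs = anderson_accelerate(M, b, np.zeros(3), m=1, n_iter=30)
    x_star = np.linalg.solve(np.eye(3) - M, b)   # exact fixed point
    print("final error:", np.linalg.norm(xs[-1] - x_star))
```

With m = 1 this reproduces the AA(1) setting analyzed in the abstract, where the scalar least-squares solve determines the acceleration coefficient β_k at each step.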