A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix

12/12/2019
by   Daniel Dadush, et al.

Following the breakthrough work of Tardos in the bit-complexity model, Vavasis and Ye gave the first exact algorithm for linear programming in the real model of computation with running time depending only on the constraint matrix. For solving a linear program (LP) max c^⊤ x, Ax = b, x ≥ 0, A ∈ R^{m × n}, Vavasis and Ye developed a primal-dual interior point method using a 'layered least squares' (LLS) step, and showed that O(n^3.5 log(χ̅_A + n)) iterations suffice to solve (LP) exactly, where χ̅_A is a condition measure controlling the size of solutions to linear systems related to A. Monteiro and Tsuchiya, noting that the central path is invariant under rescalings of the columns of A and c, asked whether there exists an LP algorithm depending instead on the measure χ̅^*_A, defined as the minimum χ̅_{AD} value achievable by a column rescaling AD of A, and gave strong evidence that this should be the case. We resolve this open question affirmatively. Our first main contribution is an O(m^2 n^2 + n^3) time algorithm which works on the linear matroid of A to compute a nearly optimal diagonal rescaling D satisfying χ̅_{AD} ≤ n(χ̅^*_A)^3. This algorithm also allows us to approximate the value of χ̅_A up to a factor n(χ̅^*_A)^2. As our second main contribution, we develop a scaling-invariant LLS algorithm, together with a refined potential-function-based analysis for LLS algorithms in general. With this analysis, we derive an improved O(n^2.5 log n log(χ̅^*_A + n)) iteration bound for optimally solving (LP) using our algorithm. The same argument also yields a factor n/log n improvement on the iteration complexity bound of the original Vavasis-Ye algorithm.
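The scaling invariance referred to above is elementary to verify on the LP itself: rescaling the columns of A by a positive diagonal matrix D (and c by D) maps each feasible point x to D^{-1}x without changing the objective value, which is why a condition measure like χ̅^*_A that is defined over all such rescalings is the natural target. A minimal sketch of this correspondence, using small made-up data (the matrix A, vectors b, c, x, and scaling D below are illustrative, not from the paper):

```python
import numpy as np

# Illustrative LP data for max c^T x  s.t.  Ax = b, x >= 0.
A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 3.0]])
x = np.array([1.0, 1.0, 1.0])   # a feasible point chosen by hand
b = A @ x                        # so that Ax = b holds exactly
c = np.array([1.0, 2.0, 0.5])

# Positive diagonal column rescaling D (arbitrary choice).
D = np.diag([2.0, 0.5, 3.0])

# Rescaled program: max (Dc)^T y  s.t.  (AD) y = b, y >= 0.
# The map x -> D^{-1} x carries feasible points to feasible points.
y = np.linalg.solve(D, x)

assert np.allclose((A @ D) @ y, b)          # feasibility is preserved
assert np.isclose((D @ c) @ y, c @ x)       # objective value is unchanged
```

The central path of the rescaled program is the image of the original central path under the same map x ↦ D^{-1}x, which is the invariance Monteiro and Tsuchiya's question builds on.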
