LSE Research Online

A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix

Dadush, Daniel, Huiberts, Sophie, Natura, Bento and Végh, László A. (2020) A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix. In: Makarychev, Konstantin, Makarychev, Yury, Tulsiani, Madhur, Kamath, Gautam and Chuzhoy, Julia, (eds.) STOC 2020 - Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing. Proceedings of the Annual ACM Symposium on Theory of Computing. Association for Computing Machinery, USA, pp. 761-774. ISBN 9781450369794

Text (A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix) - Accepted Version (723kB)

Identification Number: 10.1145/3357713.3384326


Following the breakthrough work of Tardos (Oper. Res. '86) in the bit-complexity model, Vavasis and Ye (Math. Prog. '96) gave the first exact algorithm for linear programming in the real model of computation with running time depending only on the constraint matrix. For solving a linear program (LP) max c⊤x, Ax = b, x ≥ 0, A ∈ ℝ^{m×n}, Vavasis and Ye developed a primal-dual interior point method using a 'layered least squares' (LLS) step, and showed that O(n^{3.5} log(χ̄_A + n)) iterations suffice to solve (LP) exactly, where χ̄_A is a condition measure controlling the size of solutions to linear systems related to A. Monteiro and Tsuchiya (SIAM J. Optim. '03), noting that the central path is invariant under rescalings of the columns of A and c, asked whether there exists an LP algorithm depending instead on the measure χ̄*_A, defined as the minimum χ̄_{AD} value achievable by a column rescaling AD of A, and gave strong evidence that this should be the case. We resolve this open question affirmatively. Our first main contribution is an O(m^2 n^2 + n^3) time algorithm which works on the linear matroid of A to compute a nearly optimal diagonal rescaling D satisfying χ̄_{AD} ≤ n(χ̄*_A)^3. This algorithm also allows us to approximate the value of χ̄_A up to a factor n(χ̄*_A)^2. This result is in (surprising) contrast to that of Tunçel (Math. Prog. '99), who showed NP-hardness for approximating χ̄_A to within 2^{poly(rank(A))}. The key insight for our algorithm is to work with ratios g_i/g_j of circuits of A - i.e., minimal linear dependencies Ag = 0 - which allow us to approximate the value of χ̄*_A by a maximum geometric mean cycle computation in what we call the 'circuit ratio digraph' of A. While this resolves Monteiro and Tsuchiya's question by appropriate preprocessing, it falls short of providing either a truly scaling-invariant algorithm or an improvement upon the base LLS analysis.
In this vein, as our second main contribution we develop a scaling-invariant LLS algorithm, which uses and dynamically maintains improving estimates of the circuit ratio digraph, together with a refined potential-function-based analysis for LLS algorithms in general. With this analysis, we derive an improved O(n^{2.5} log n log(χ̄*_A + n)) iteration bound for optimally solving (LP) using our algorithm. The same argument also yields a factor n/log n improvement on the iteration complexity bound of the original Vavasis-Ye algorithm.
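The abstract mentions approximating χ̄*_A via a maximum geometric mean cycle computation in the circuit ratio digraph. As an illustrative sketch only (not the paper's implementation, and with the graph encoding and function name being our own assumptions), the generic subroutine of finding the directed cycle maximizing the geometric mean of positive edge weights reduces to a maximum mean cycle computation on log-weights, solvable by Karp's dynamic-programming algorithm:

```python
import math

def max_geometric_mean_cycle(n, edges):
    """Illustrative sketch: maximum geometric mean cycle via Karp's
    algorithm applied to log-weights.

    n     -- number of vertices (labelled 0..n-1)
    edges -- list of (u, v, ratio) arcs with ratio > 0
    Returns the maximum geometric mean over all directed cycles,
    or None if the graph is acyclic.
    """
    NEG = float("-inf")
    # D[k][v] = maximum total log-weight of a walk with exactly k arcs
    # ending at v; initializing D[0][v] = 0 for every v acts like a
    # zero-weight super-source, the standard setup for Karp's theorem.
    D = [[NEG] * n for _ in range(n + 1)]
    for v in range(n):
        D[0][v] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if D[k - 1][u] > NEG:
                cand = D[k - 1][u] + math.log(w)
                if cand > D[k][v]:
                    D[k][v] = cand
    best = None
    for v in range(n):
        if D[n][v] == NEG:
            continue
        # Karp (maximization form): the optimum mean equals
        # max over v of min over k of (D[n][v] - D[k][v]) / (n - k).
        m = min((D[n][v] - D[k][v]) / (n - k) for k in range(n))
        best = m if best is None else max(best, m)
    # Undo the logarithm to recover the geometric mean.
    return math.exp(best) if best is not None else None
```

In the paper's setting the arcs would carry the pairwise circuit ratios κ_ij, but here the weights are arbitrary positive numbers; the O(n·|E|) dynamic program is the textbook approach to this subproblem.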

Item Type: Book Section
Divisions: Mathematics
Date Deposited: 06 Jul 2020 10:54
Last Modified: 20 Jan 2021 02:16
