
#73: Improvement of Linear Programming on 14.07.2018

In the following section, we solve linear programmes (LPs) by the simplex method and by the intex method (inter-/extrapolation), the latter as a strongly polynomial solution to Smale's 9th problem.

Diameter theorem for polytopes: The diameter of an n-dimensional polytope defined by m constraints with m, n ∈ ℕ≥2 is at most 2(m + n - 3).

Proof: We can assemble at most m - 1 hyperplanes into an incomplete cycle (of dimension 2) and have to consider n - 2 alternatives sideways (in the remaining dimensions). Since each section can be passed with at most two edges, the factor is 2. The theorem can be extended analogously to polyhedra by dropping the requirement of finiteness.□
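As a quick numeric illustration of the bound (a sketch in Python; the helper diameter_bound is not part of the original text): the d-dimensional cube has n = d and m = 2d facets, so the bound 2(m + n - 3) comfortably exceeds its actual graph diameter d.

    def diameter_bound(m, n):
        """Upper bound 2(m + n - 3) from the diameter theorem above."""
        return 2 * (m + n - 3)

    # d-cube: n = d dimensions, m = 2d facets, actual graph diameter is d.
    for d in (2, 3, 4):
        print(d, diameter_bound(2 * d, d))   # prints 2 6, 3 12, 4 18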

Theorem: The simplex method is not strongly polynomial.

Proof and algorithm: Let P := {x ∈ ℝⁿ : Ax ≤ b, b ∈ ℝᵐ, A ∈ ℝᵐˣⁿ, m, n ∈ ℕ*} be the feasible domain of the LP max {dᵀx : d ∈ ℝⁿ, x ∈ P}. By taking the dual or by setting x := x⁺ - x⁻ with x⁺, x⁻ ≥ 0, we may assume x ≥ 0. If b ≥ 0 does not hold, we first solve max {-z : Ax - b ≤ (z, ..., z)ᵀ ∈ ℝᵐ, z ≥ 0} to obtain a feasible x. The initial and target values are z := |min {b1, ..., bm}| and z = 0. We begin with x := 0 as in the first case. Pivoting if necessary, we may assume that b ≥ 0.
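A minimal sketch of this Phase-1 step in Python, using SciPy's linprog merely as a stand-in solver (an assumption; the author's own implementation is the simplex code linked below):

    import numpy as np
    from scipy.optimize import linprog   # stand-in LP solver, not the author's code

    def phase1_start(A, b):
        """Solve max {-z : Ax - (z, ..., z)^T <= b, x >= 0, z >= 0} to get a feasible x.
        With x = 0 the initial value is z = |min b_i| whenever some b_i < 0."""
        m, n = A.shape
        c = np.zeros(n + 1)
        c[-1] = 1.0                                   # minimise z, i.e. maximise -z
        A_ub = np.hstack([A, -np.ones((m, 1))])       # Ax - z*1 <= b
        res = linprog(c, A_ub=A_ub, b_ub=b,
                      bounds=[(0, None)] * (n + 1), method="highs")
        if res.success and res.fun <= 1e-9:           # target value z = 0 reached
            return res.x[:n]                          # feasible starting point
        return None                                   # no feasible x >= 0 exists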

Let i, j, k ∈ ℕ* and let aiᵀ be the i-th row vector of A. If dj ≤ 0 for all j, the LP is solved. If, for some dj > 0 (dj = 0), we have aij ≤ 0 for all i, the LP is positively unbounded (respectively, we may for now drop dj and A.j as well as bi and ai, the latter only where aij < 0 holds). Likewise, a constraint with aijxj ≥ 0 > bi for all j has no solution. If necessary, divide every constraint aiᵀx ≤ bi by ||ai||, and divide all dj and aij by the minimum of the |aij| with aij ≠ 0 for each j. This will be reversed later. If necessary, renormalise by ||ai||.

We may always remove redundant constraints (those with ai ≤ 0). For each dj > 0 and non-basic variable xj, select the minimum ratio bk/akj over all i with aij > 0. Quantities marked with * refer to the next step. The next potential vertex is given by xj* = xj + bk/akj for feasible x*. To select the steepest edge, choose the pivot akj in the k-th constraint, corresponding to xj, that maximises dᵀΔx/||Δx|| or dj²/(1 + ||A.j||²) for Δx := x* - x.
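The following Python sketch combines the checks of the last two paragraphs, the minimum ratio test and the steepest-edge score dj²/(1 + ||A.j||²); the function name and return conventions are assumptions of this sketch, not the author's:

    import numpy as np

    def choose_pivot(A, b, d, eps=1e-12):
        """Return ('optimal', None), ('unbounded', j) or ('pivot', (k, j))."""
        m, n = A.shape
        best = None
        for j in range(n):
            if d[j] <= eps:
                continue                                   # not an improving column
            rows = [i for i in range(m) if A[i, j] > eps]
            if not rows:
                return ("unbounded", j)                    # d_j > 0 and A_.j <= 0
            k = min(rows, key=lambda i: b[i] / A[i, j])    # minimum ratio b_k / a_kj
            score = d[j] ** 2 / (1.0 + float(A[:, j] @ A[:, j]))  # steepest edge
            if best is None or score > best[0]:
                best = (score, k, j)
        return ("optimal", None) if best is None else ("pivot", (best[1], best[2]))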

If there are multiple maxima, select maxk,j djbk/akj according to the rule of the best pivot value, or alternatively (less effectively) the smallest angle min Σj dj*/||d*||. If we cannot directly maximise the objective function, we relax (perturb) the constraints with bi = 0 by the same minimal amount. These relaxations need not be written into the tableau: we simply set bi = ||ai||.

If another multiple vertex is encountered, although this is unlikely, we simply increase the earlier bi by ||ai||. The cost of eliminating a multiple vertex, after which we revert the relaxation, corresponds to an LP with d > 0 and b = 0. Along the chosen path, the objective function increases (effectively) strictly monotonically. We can then simply compute dj*, aij* and bi* using the rectangle rule (cf. [775], p. 63).
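The rectangle rule itself is the usual tableau update; a short sketch, assuming the tableau T carries the objective row and the right-hand side alongside A:

    import numpy as np

    def rectangle_rule(T, k, j):
        """Update a simplex tableau T around the pivot entry T[k, j]:
        normalise the pivot row, then clear column j in all other rows."""
        T = T.astype(float).copy()
        T[k, :] /= T[k, j]                     # pivot row divided by a_kj
        for i in range(T.shape[0]):
            if i != k:
                T[i, :] -= T[i, j] * T[k, :]   # a_il* = a_il - a_ij a_kl / a_kj
        return T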

In the worst case, the simplex method is not strongly polynomial under any given set of pivoting rules, despite the diameter theorem for polytopes: an exponential "drift" can be constructed with Klee-Minty polytopes, Jeroslow polytopes, or others, which forces the selection of the least favourable edge and creates a large deviation from the shortest path. This is consistent with existing proofs. The result follows.□
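For concreteness, here is one common variant of the Klee-Minty construction mentioned above (a sketch; the polytopes used in the cited proofs may be scaled differently), on which Dantzig's pivoting rule needs 2ⁿ - 1 pivot steps:

    def klee_minty(n):
        """max sum_j 2**(n-j) x_j  s.t.  sum_{j<i} 2**(i-j+1) x_j + x_i <= 5**i,  x >= 0."""
        A = [[0] * n for _ in range(n)]
        b = [5 ** (i + 1) for i in range(n)]
        d = [2 ** (n - 1 - j) for j in range(n)]
        for i in range(n):
            A[i][i] = 1
            for j in range(i):
                A[i][j] = 2 ** (i - j + 1)
        return A, b, d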

Theorem: Let wlog η := maxi,j {-bi, dj} > 0. The intex method solves every solvable LP in O(mn) ≤ mnqh/Δh for a small q ∈ ℕ* and a height h ∈ ]0, η].

Proof and algorithm: We compute the LP min {h ∈ ℝ≥0 : bᵀy - dᵀx ≤ h, 0 ≤ x ∈ ℝⁿ, 0 ≤ y ∈ ℝᵐ, Ax - b ≤ (h, ..., h)ᵀ ∈ ℝᵐ, d - Aᵀy ≤ (h, ..., h)ᵀ ∈ ℝⁿ} via the (dual) programme min {bᵀy : 0 ≤ y ∈ ℝᵐ, Aᵀy ≥ d} for the (primal) programme max {dᵀx : d ∈ ℝⁿ, x ∈ P≥0}. Let v := (xᵀ, yᵀ)ᵀ ∈ ℝᵐ⁺ⁿ and let the goal be h = 0. The case η ≤ 0 is solved by v := 0. We normalise and scale bᵀy - dᵀx ≤ 0, Ax ≤ b and Aᵀy ≥ d. We sort all scalar products dᵀai, -dj, bᵀA.j and bi.
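A sketch of how this combined system can be assembled as a single block inequality M v' ≤ r with v' = (xᵀ, yᵀ, h)ᵀ ≥ 0 and objective h (the helper name intex_lp is an assumption of this sketch):

    import numpy as np

    def intex_lp(A, b, d):
        """Combined system: b^T y - d^T x <= h, Ax - b <= h*1, d - A^T y <= h*1,
        variables (x, y, h) >= 0, objective min h; reaching h = 0 certifies
        optimality of (x, y) by strong duality."""
        m, n = A.shape
        M = np.vstack([
            np.hstack([-d.reshape(1, n), b.reshape(1, m), [[-1.0]]]),  # duality gap row
            np.hstack([A, np.zeros((m, m)), -np.ones((m, 1))]),        # Ax - h*1 <= b
            np.hstack([np.zeros((n, n)), -A.T, -np.ones((n, 1))]),     # -A^T y - h*1 <= -d
        ])
        r = np.concatenate([[0.0], b, -d])
        c = np.zeros(n + m + 1)
        c[-1] = 1.0                                                    # minimise h
        return c, M, r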

We compute x, and analogously y, from some equations aiᵀx = bi or xj = 0 with the largest corresponding dᵀai or -dj. The initial value for h is the minimum needed for the inequalities to hold. We successively interpolate all vk* := (max vk + min vk)/2 (a few times) and determine the direction u ∈ ℝᵐ⁺ⁿ with uk := Δvk Δhk/min Δhk, after solving all LPs min hk in hk, vk ∈ ℝ≥0. We solve min h in h, s ∈ ℝ≥0 for v* := v + su and fix v.

The bisection method solves the two-dimensional LPs in O(mn) and O(m + n). After interpolating anew, we extrapolate (vᵀ, h)ᵀ through (v*ᵀ, h*)ᵀ in O(m + n) into the boundary of the polytope. We compute (approaching the optimum) h as the minimum of all bi - aiᵀx, A.jᵀy - dj and dᵀx - bᵀy. By the bisection method, we solve the two-dimensional LP min h in h, t ∈ ℝ≥0 for v* := v + tr with rᵀ ∈ {(0ᵀ, A.jᵀ), (-aiᵀ, 0ᵀ), (-aiᵀ, A.jᵀ), (dᵀ, -bᵀ)} and fix v in O(m + n), too.
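A hedged sketch of these one-parameter subproblems: the smallest admissible h at a point v = (xᵀ, yᵀ)ᵀ and a line search along a direction r. The text prescribes the bisection method; an equivalent ternary search on the piecewise-linear, convex function is used here for brevity, and v + tr ≥ 0 is assumed.

    import numpy as np

    def height(A, b, d, x, y):
        """Smallest h >= 0 for which (x, y, h) satisfies the relaxed system above."""
        return max(0.0, float(b @ y - d @ x),
                   float(np.max(A @ x - b)), float(np.max(d - A.T @ y)))

    def line_search(A, b, d, v, r, t_max, iters=60):
        """min_t height(v + t r) for t in [0, t_max]."""
        m, n = A.shape
        f = lambda t: height(A, b, d, (v + t * r)[:n], (v + t * r)[n:])
        lo, hi = 0.0, t_max
        for _ in range(iters):
            t1, t2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
            lo, hi = (lo, t2) if f(t1) <= f(t2) else (t1, hi)
        return 0.5 * (lo + hi)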

We may be able to do without the two-dimensional LPs. If x and y are optimal or if h cannot be minimised any further, we stop the procedure. When we cannot leave v, we relax all constraints except v ≥ 0 a little and undo this at the end of the iteration step. Since Δh is, for each iteration step in O(mn), fixed (!) or even more than half of h, the claim follows from the strong duality theorem ([775], p. 60 - 65).□

Remarks: The simplex method and the face algorithm ([916], p. 580 f.) may solve an LP faster for small m and n. We can easily change the current stock of constraints or variables, because the intex method is one of the non-transforming methods and currently the fastest known (worst-case) LP-solving algorithm. We simply have to adjust h if necessary, since additional variables can be initialised with 0.

Corollary: Every solvable linear system (LS) Ax = b with x ∈ ℝⁿ can be solved in O(mn) if we write it as the LP min {h ∈ ℝ≥0 : ±(Ax - b) ≤ (h, ..., h)ᵀ ∈ ℝᵐ}.□
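For illustration only, the corollary's reformulation handed to a generic LP solver (SciPy's linprog as a stand-in; the O(mn) bound of the corollary itself rests on the intex method):

    import numpy as np
    from scipy.optimize import linprog

    def solve_ls_as_lp(A, b, tol=1e-9):
        """Solve Ax = b via min {h >= 0 : ±(Ax - b) <= (h, ..., h)^T}."""
        m, n = A.shape
        c = np.zeros(n + 1)
        c[-1] = 1.0                                           # minimise h
        A_ub = np.vstack([np.hstack([A, -np.ones((m, 1))]),   #  (Ax - b) <= h*1
                          np.hstack([-A, -np.ones((m, 1))])]) # -(Ax - b) <= h*1
        b_ub = np.concatenate([b, -b])
        bounds = [(None, None)] * n + [(0, None)]             # x free, h >= 0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[:n] if res.success and res.fun <= tol else None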

Corollary: The LP min {h ∈ ℝ≥0 : ±(Ax - λx) ≤ (h, ..., h)ᵀ ∈ ℝⁿ} can determine an eigenvector x ∈ ℝⁿ \ {0} of the matrix A ∈ ℝⁿˣⁿ for the eigenvalue λ ∈ ℝ in O(n²) if it is solvable.□

Corollary: Let αj be the j-th column vector of the matrix A⁻¹ ∈ ℝⁿˣⁿ and let δij be the Kronecker delta. Then every LS Aαj = (δ1j, ..., δnj)ᵀ determining the inverse A⁻¹ of the regular matrix A can be solved for j = 1, ..., n in O(n²). Whether A is regular can also be determined in O(n²).□
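A short sketch of the column-by-column inversion described in this corollary; solve_ls stands for any LS solver in the sense of the first corollary, e.g. the solve_ls_as_lp sketch above:

    import numpy as np

    def inverse_by_columns(A, solve_ls):
        """Invert the regular matrix A by solving A*alpha_j = e_j for j = 1, ..., n.
        solve_ls(A, e) should return None when no solution exists; in that case
        A is not regular and None is returned."""
        n = A.shape[0]
        cols = []
        for j in range(n):
            e = np.zeros(n)
            e[j] = 1.0
            alpha = solve_ls(A, e)
            if alpha is None:
                return None
            cols.append(alpha)
        return np.column_stack(cols)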

Remarks: The three corollaries can easily be transferred to the complex case. The intex method is numerically very stable, since the initial data are only slightly altered, and, if modified for distributed computing, it can solve an LP in O(1). It may also be applied to (mixed) integer problems and to branch-and-bound, in particular for nonconvex optimisation. It can easily be extended to convex programmes (with a vector-valued objective function f1):

Corollary: Every solvable convex programme min {f1(x) : x ∈ ℝⁿ, (f2(x), ..., fm(x))ᵀ ≤ 0}, where the fi are real-valued convex functions, can be solved in strongly polynomial time by the intex method together with two-dimensional bisection or Newton's methods, which handle the fi in O(p), where p ∈ ℕ* denotes the number of operands xj of the fi, provided that an x exists such that fi(x) < 0 for all i > 1 (see [939], p. 589 ff.).□

Further thoughts: Gomory or Lenstra cuts can find an integer solution of the original problem in polynomial time if we additionally assume wlog that A, b and d are integer and that m and n are fixed. By dualising, a full-dimensional LP may be obtained within the intex method as described before. This shows that the problem of (mixed) integer linear programming is not NP-complete:

Theorem: (Mixed) integer LPs may be solved in polynomial time.□

code of simplex method

© 14.07.2018 by Boris Haase

