In the following section, we solve linear programmes (LPs). The well-known simplex method is described, together with the strongly polynomial normal method as a solution of Smale's 9th problem. The normal method may be generalised to (non-)convex programmes (with a vector-valued objective function). The diameter theorem for polytopes is proven, and it is shown that (mixed) integer LPs can be solved in polynomial time.

Proof and algorithm: Let P := {x ∈ ^{c}ℝ^{n} : Ax ≤ b, b ∈ ^{c}ℝ^{m}, A ∈ ^{c}ℝ^{m×n}, m, n ∈ ^{c}ℕ*} be the feasible domain of the LP max {d^{T}x : d ∈ ^{c}ℝ^{n}, x ∈ P}. By taking the dual or setting x := x^{+} - x^{-} with x^{+}, x^{-} ≥ 0, we may assume x ≥ 0. If b ≥ 0 does not hold, we first solve max {-z : Ax - b ≤ (z, …, z)^{T} ∈ ^{c}ℝ^{m}, z ≥ 0} to obtain a feasible x. The initial and target values are z := |min {b_{1}, ..., b_{m}}| and z = 0 respectively, and we begin with x := 0 as in the first case. Pivoting if necessary, we may assume that b ≥ 0.
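The phase-one construction above can be sketched as follows. This is a minimal illustration in Python with dense NumPy arrays, not part of the original text; the helper name `phase_one_data` and the concrete A, b are ours:

```python
import numpy as np

def phase_one_data(A, b):
    """Build the auxiliary LP  max {-z : Ax - z*1 <= b, x >= 0, z >= 0}
    used to find a feasible starting point when b >= 0 fails.
    Returns (c, A_ub, b_ub) in 'minimise c^T v' form with v = (x, z)."""
    m, n = A.shape
    c = np.zeros(n + 1)
    c[-1] = 1.0                               # minimise z, i.e. maximise -z
    A_ub = np.hstack([A, -np.ones((m, 1))])   # Ax - z*1 <= b
    return c, A_ub, b

A = np.array([[1.0, 1.0], [-1.0, 0.0]])
b = np.array([4.0, -2.0])                     # b_2 < 0: x = 0 is infeasible
c, A_ub, b_ub = phase_one_data(A, b)

# the starting point x = 0, z = |min b_i| from the text is feasible:
v0 = np.zeros(3)
v0[-1] = abs(b.min())
assert np.all(A_ub @ v0 <= b_ub + 1e-12)
```

Reaching the target value z = 0 then yields a feasible x for the original LP.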

Let i, j, k ∈ ^{c}ℕ* and let a_{i}^{T} be the i-th row vector of A. If d_{j} ≤ 0 for all j, the LP is solved. If a_{ij} ≤ 0 for all i for some d_{j} > 0 (resp. d_{j} = 0), the LP is positively unbounded (resp. we may for now drop d_{j} and A_{.j}, as well as b_{i} and a_{i}, the latter only when a_{ij} < 0 holds). Otherwise, we divide each constraint a_{i}^{T}x ≤ b_{i} by ||a_{i}||, and all d_{j} and a_{ij} by the minimum of |a_{ij}| over a_{ij} ≠ 0 for each j, to improve the runtime performance where possible. This is reversed later. If necessary, we renormalise by ||a_{i}||.

In each step, we can remove duplicate constraints and those with a_{i} ≤ 0, since they are redundant (this can be avoided by adding an extra slack variable in each case). The second case is analogous. If in both cases b_{i} = 0 and a_{i} ≥ 0 for some i, then the LP has maximum 0 and solution x = 0 if b ≥ 0; otherwise it has no solution. In each step, for each d_{j} > 0 and non-basic variable x_{j}, we select the minimum ratio b_{k}/a_{kj} over all i with a_{ij} > 0.
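The minimum-ratio test just described can be sketched as follows; the function name `min_ratio_row` and the sample data are our own illustration:

```python
import numpy as np

def min_ratio_row(A, b, j):
    """Return the row index k minimising b_i / a_ij over all i with a_ij > 0,
    or None if no such row exists (the LP is unbounded in direction j)."""
    col = A[:, j]
    rows = np.where(col > 0)[0]
    if rows.size == 0:
        return None
    ratios = b[rows] / col[rows]
    return rows[np.argmin(ratios)]

A = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, 1.0]])
b = np.array([4.0, 6.0, 1.0])
k = min_ratio_row(A, b, 0)   # ratios 4/2 = 2 and 6/1 = 6, so row 0 wins
```

Row 3 is skipped because its coefficient in column 0 is negative.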

The variables marked with * are those considered in the next step. The next potential vertex is given by x_{j}* = x_{j} + b_{k}/a_{kj} for feasible x*. To select the steepest edge, choose the pivot a_{kj}, corresponding to x_{j} in the k-th constraint, that maximises d^{T}(x* - x)/||x* - x||, i.e. d_{j}^{2}/(1 + ||A_{.j}||^{2}). If there are multiple maxima, select max d_{j}b_{k}/a_{kj} according to the rule of best pivot value, or alternatively (less effectively) the smallest angle min Σ d_{j}*/||d*||.
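The steepest-edge criterion d_{j}^{2}/(1 + ||A_{.j}||^{2}) can be sketched as follows; the helper name `steepest_edge_column` is our own:

```python
import numpy as np

def steepest_edge_column(d, A):
    """Among columns with d_j > 0, pick the j maximising
    d_j^2 / (1 + ||A_.j||^2); return None if d <= 0 (LP solved)."""
    cand = np.where(d > 0)[0]
    if cand.size == 0:
        return None
    scores = d[cand] ** 2 / (1.0 + np.sum(A[:, cand] ** 2, axis=0))
    return cand[np.argmax(scores)]

d = np.array([3.0, 1.0, -2.0])
A = np.array([[4.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
j = steepest_edge_column(d, A)
# scores: 9/(1+16+4) = 9/21 vs. 1/(1+0+1) = 1/2, so column 1 is chosen
```

Note that the largest reduced cost d_{j} alone (Dantzig's rule) would have chosen column 0 instead.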

If more than n of the values b_{i} equal 0 and we cannot maximise the objective function directly, we relax (perturb) the constraints with b_{i} = 0 by the same minimal modulus. These need not be written into the tableau: we simply set b_{i} := ||a_{i}||. If another multiple vertex is encountered, unlikely as this is, we simply increase the earlier b_{i} by ||a_{i}||.

The cost of eliminating a multiple vertex, after which we revert the relaxation, corresponds to the cost of solving an LP with b = 0. The same task is potentially required at the end of the process when checking whether the LP admits any other solutions if at least two of the d_{j} are 0. Along the chosen path, the objective function increases (effectively) strictly monotonically. We can then simply calculate d_{j}*, a_{ij}* and b_{i}* using the rectangle rule.
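The rectangle rule mentioned above is the standard pivot update T*_{il} = T_{il} - T_{ij}T_{kl}/T_{kj}; a minimal sketch on a dense tableau (our own arrangement of rows) looks like this:

```python
import numpy as np

def rectangle_rule_pivot(T, k, j):
    """Pivot the tableau T on entry T[k, j] using the rectangle rule
    T*_il = T_il - T_ij * T_kl / T_kj (after scaling the pivot row)."""
    T = T.astype(float).copy()
    T[k] /= T[k, j]                       # scale pivot row to make T[k, j] = 1
    for i in range(T.shape[0]):
        if i != k:
            T[i] -= T[i, j] * T[k]        # eliminate column j from row i
    return T

# last column holds b, last row the objective coefficients d
T = np.array([[2.0, 1.0, 4.0],
              [1.0, 3.0, 6.0],
              [3.0, 2.0, 0.0]])
T2 = rectangle_rule_pivot(T, 0, 0)
# the pivot column becomes a unit vector
```

After the pivot, the updated d_{j}*, a_{ij}* and b_{i}* can all be read off the new tableau.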

In the worst case, the simplex method is not strongly polynomial under any given set of pivoting rules, despite the diameter theorem for polytopes (see below), since an exponential "drift" can be constructed with Klee-Minty or Jeroslow polytopes, among others, creating a large deviation from the shortest path by forcing the selection of the least favourable edge. This is consistent with existing proofs. The result follows.⃞
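The Klee-Minty construction can be written down explicitly. The sketch below uses the standard variant with right-hand sides 5^{i} (an assumption, since the text does not fix a particular formulation):

```python
import numpy as np

def klee_minty(n):
    """Klee-Minty cube in n dimensions:
    maximise sum_j 2^(n-j) x_j subject to
    2 * sum_{j<i} 2^(i-j) x_j + x_i <= 5^i and x >= 0.
    The simplex method with Dantzig's rule visits all 2^n vertices."""
    A = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, i):
            A[i - 1, j - 1] = 2.0 ** (i - j + 1)
        A[i - 1, i - 1] = 1.0
    b = np.array([5.0 ** i for i in range(1, n + 1)])
    d = np.array([2.0 ** (n - j) for j in range(1, n + 1)])
    return A, b, d

A, b, d = klee_minty(3)
# A = [[1,0,0],[4,1,0],[8,4,1]], b = [5,25,125]; the optimum is x = (0,0,125)
```

The optimal value 5^{n} is reached only at the last of the 2^{n} vertices on the forced path.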

Theorem: The normal method solves the LP in O(mn), if it is solvable.

Proof and algorithm: We compute the LP min {h ∈ ^{c}ℝ_{≥0} : b^{T}y - d^{T}x ≤ h, 0 ≤ x ∈ ^{c}ℝ^{n}, 0 ≤ y ∈ ^{c}ℝ^{m}, Ax - b ≤ (h, …, h)^{T} ∈ ^{c}ℝ^{m}, d - A^{T}y ≤ (h, …, h)^{T} ∈ ^{c}ℝ^{n}} via the (dual) programme min {b^{T}y : 0 ≤ y ∈ ^{c}ℝ^{m}, A^{T}y ≥ d} for the (primal) programme max {d^{T}x : d ∈ ^{c}ℝ^{n}, x ∈ P_{≥0}}. First, we normalise b^{T}y - d^{T}x ≤ 0, Ax ≤ b and A^{T}y ≥ d. We compute all scalar products d^{T}a_{i} resp. -d_{j} and sort them. Let v := (x^{T}, y^{T})^{T} ∈ ^{c}ℝ^{m+n}. We start with v := 0.
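The combined primal-dual system above can be assembled mechanically. The following sketch (function name and example ours) builds the constraints as a single matrix inequality and checks that, by strong duality, an optimal primal-dual pair makes h = 0 feasible:

```python
import numpy as np

def normal_method_system(A, b, d):
    """Assemble the constraints of  min {h : b^T y - d^T x <= h,
    Ax - b <= h*1, d - A^T y <= h*1, x, y, h >= 0}  as  M v <= r
    with v = (x, y, h)."""
    m, n = A.shape
    rows, rhs = [], []
    rows.append(np.concatenate([-d, b, [-1.0]]))           # b^T y - d^T x - h <= 0
    rhs.append(0.0)
    for i in range(m):                                     # a_i^T x - h <= b_i
        rows.append(np.concatenate([A[i], np.zeros(m), [-1.0]]))
        rhs.append(b[i])
    for j in range(n):                                     # -A_.j^T y - h <= -d_j
        rows.append(np.concatenate([np.zeros(n), -A[:, j], [-1.0]]))
        rhs.append(-d[j])
    return np.array(rows), np.array(rhs)

# primal max {x1 : x1 <= 1, x1 >= 0} with optimum x* = 1 and dual optimum y* = 1
A = np.array([[1.0]]); b = np.array([1.0]); d = np.array([1.0])
M, r = normal_method_system(A, b, d)
v = np.array([1.0, 1.0, 0.0])       # (x*, y*, h = 0): duality gap is zero
assert np.all(M @ v <= r + 1e-12)
```

Conversely, any feasible point with h = 0 yields optimal solutions of both the primal and the dual programme.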

We compute x, and analogously y, from some equations a_{i}^{T}x = b_{i} resp. x_{j} = 0 with the largest corresponding d^{T}a_{i} resp. -d_{j}. The initial value of the *height* h is the minimum needed for the inequalities to hold. If x and y are optimal, we stop. Otherwise, we successively define, a few times, all v_{k}* := (max v_{k} + min v_{k})/2. We solve the two-dimensional LPs min h_{k} in h_{k}, v_{k} ∈ ^{c}ℝ_{≥0} and determine a favourable direction u ∈ ^{c}ℝ^{m+n} with u_{k} := Δv_{k} Δh_{k}/min Δh_{k}.

Thereafter, we solve the two-dimensional LP min h in h, t ∈ ^{c}ℝ_{≥0} with v* := v + tu. We always continue with the v* that belongs to the smallest h resp. h_{k} reached so far. The two-dimensional LPs are solved via the bisection method. We may in principle extrapolate h and v in O(m + n). When we cannot leave v, we relax all constraints except v ≥ 0 a little and undo this at the end of the iteration step in O(mn).
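Along the ray v* = v + tu, the height h is a convex piecewise-linear function of t, so the line search reduces to one-dimensional minimisation. The sketch below uses interval-thirds shrinking (ternary search), a close relative of the bisection method named in the text; the toy function phi is our own example:

```python
def minimise_convex_1d(phi, lo, hi, iters=100):
    """Minimise a convex (hence unimodal) function phi on [lo, hi]
    by repeatedly discarding a third of the interval (ternary search)."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if phi(m1) <= phi(m2):
            hi = m2          # the minimum cannot lie in (m2, hi]
        else:
            lo = m1          # the minimum cannot lie in [lo, m1)
    return (lo + hi) / 2.0

# toy height along the ray: h(t) = max(1 - t, t - 1, 0), minimised at t = 1
phi = lambda t: max(1.0 - t, t - 1.0, 0.0)
t_star = minimise_convex_1d(phi, 0.0, 10.0)
```

Each evaluation of phi costs one pass over the constraints, i.e. O(mn) per line search with a fixed number of iterations.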

The diagonal of a cube of dimension O(max(m, n)) representing h, which has the target value 0, does not matter in the calculation. Since we process a fixed amount of h in each iteration step, the claim follows from the strong duality theorem ([775], p. 60 - 65). The source code is not published here, since (average) LPs can be solved faster by the simplex or the face algorithm ([916], p. 580 f.).⃞

Corollary: Every linear system Ax = b for x ∈ ^{c}ℝ^{n} can be solved in O(mn), if we write it as LP min {h ∈ ^{c}ℝ_{≥0} : ±(Ax - b) ≤ (h, …, h)^{T} ∈ ^{c}ℝ^{m}} and if it is solvable.⃞
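The reduction of Ax = b to the LP min {h : ±(Ax - b) ≤ (h, …, h)^{T}} can be sketched as follows; the helper name `residual_lp_data` and the worked 2×2 system are ours:

```python
import numpy as np

def residual_lp_data(A, b):
    """Constraints of  min {h : ±(Ax - b) <= h*1}  as  M (x, h)^T <= r."""
    m, n = A.shape
    ones = np.ones((m, 1))
    M = np.vstack([np.hstack([A, -ones]),       #   Ax - b <= h*1
                   np.hstack([-A, -ones])])     # -(Ax - b) <= h*1
    r = np.concatenate([b, -b])
    return M, r

A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([3.0, 1.0])
M, r = residual_lp_data(A, b)
# the exact solution x = (2, 1) of Ax = b attains the optimum h = 0:
v = np.array([2.0, 1.0, 0.0])
assert np.all(M @ v <= r + 1e-12)
```

If the system is unsolvable, the optimal h is strictly positive and equals the minimal Chebyshev norm of the residual Ax - b.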

Corollary: An eigenvector x ∈ ^{c}ℝ^{n} \ {0} of the matrix A for the eigenvalue λ ∈ ^{c}ℝ can be determined in O(mn) with the LP min {h ∈ ^{c}ℝ_{≥0} : ±(Ax - λx) ≤ (h, …, h)^{T} ∈ ^{c}ℝ^{m}}, if this is solvable.⃞
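The eigenvector LP is the residual LP for the matrix A - λI. A sketch of the constraint system follows; note that in practice a normalisation such as e^{T}x = 1 (our own addition, not in the corollary) would be needed to exclude the trivial solution x = 0:

```python
import numpy as np

def eigenvector_lp_data(A, lam):
    """Constraints of  min {h : ±(A - lam*I)x <= h*1}  as  M (x, h)^T <= r."""
    n = A.shape[0]
    B = A - lam * np.eye(n)
    ones = np.ones((n, 1))
    M = np.vstack([np.hstack([B, -ones]),
                   np.hstack([-B, -ones])])
    return M, np.zeros(2 * n)

A = np.array([[2.0, 0.0], [0.0, 3.0]])
M, r = eigenvector_lp_data(A, 2.0)     # lambda = 2 is an eigenvalue of A
# the eigenvector x = (1, 0) attains the optimum h = 0:
v = np.array([1.0, 0.0, 0.0])
assert np.all(M @ v <= r + 1e-12)
```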

Corollary: Every convex programme min {f_{1}(x) : x ∈ ^{c}ℝ^{n}, (f_{2}(x), …, f_{m}(x))^{T} ≤ 0}, where the f_{i} : ^{c}ℝ^{n} → ^{c}ℝ are convex functions, is strongly polynomial and may be solved by the normal method together with two-dimensional Newton's methods handling the f_{i} in O(p), where p ∈ ^{c}ℕ* denotes the number of operands x_{j} of the f_{i}, assuming that it is solvable and that an x exists such that f_{i}(x) < 0 for all i > 1 (see [939], p. 589 ff.).⃞

Remarks: The normal method is numerically very stable, since the initial data are barely altered, and it is currently the fastest known worst-case LP-solving algorithm. It may also be applied to (mixed) integer problems and branch-and-bound, in particular for nonconvex optimisation. It can be parallelised better than the simplex method, and it can easily be extended to convex programmes with a vector-valued objective function f_{1}.

Diameter theorem for polytopes: The diameter of an n-dimensional polytope defined by m constraints with m, n ∈ ^{c}ℕ* is at most max (2(m - n), 0).

Proof: Each vertex of a (potentially deformed) hypercube is formed by at most n hyperplanes. If we complete the polytope by adding or removing hyperplanes, the claim follows for the chosen path, since each step adds at most two additional edges. This theorem can be extended to polyhedra analogously by dropping the requirement of finiteness.⃞
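The bound max(2(m - n), 0) can be sanity-checked on the standard cube, which has m = 2n facets and graph diameter n; this is merely an illustrative check in Python (our own construction), not a proof:

```python
from itertools import product

def cube_graph_diameter(n):
    """Graph diameter of the n-dimensional 0/1 cube: vertices are bit
    tuples, edges join vertices differing in exactly one coordinate."""
    verts = list(product((0, 1), repeat=n))

    def bfs_eccentricity(s):
        dist = {s: 0}
        queue = [s]
        for v in queue:                      # BFS; list grows as we append
            for i in range(n):
                w = v[:i] + (1 - v[i],) + v[i + 1:]
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        return max(dist.values())

    return max(bfs_eccentricity(v) for v in verts)

n = 3
m = 2 * n                                    # the cube has 2n facets
diam = cube_graph_diameter(n)                # equals n = 3
assert diam <= max(2 * (m - n), 0)           # 3 <= 6, consistent with the bound
```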

Further thoughts: Gomory cuts or Lenstra's algorithm can find an integer solution of the original problem in polynomial time if we additionally assume that A, b, and d are wlog integer and that m and n are fixed. By dualising, a full-dimensional LP may be obtained within the normal method, as described before. This shows that the problem of (mixed) integer linear programming is not NP-complete:

Theorem: (Mixed) integer LPs may be solved in polynomial time.⃞

© 27.11.2017 by Boris Haase
