In the following section, we solve linear programmes (LPs). The well-known simplex method is described, together with the normal method, which is at most quadratic and, moreover, strongly polynomial, as a solution of Smale's 9th problem. We generalise the normal method to (non-)convex programmes (with vector-valued functions). The diameter theorem for polytopes is proven, and it is shown that (mixed) integer LPs are solvable in polynomial time.

Proof and algorithm: Let M := {x ∈ ^{κ}ℝ^{n} : Ax ≤ b, b ∈ ^{κ}ℝ^{m}, A ∈ ^{κ}ℝ^{m×n}, m, n ∈ ^{κ}ℕ*} be the feasible domain of the LP max {c^{T}x : c ∈ ^{κ}ℝ^{n}, x ∈ M}. By taking the dual or by setting x := x^{+} - x^{-} with x^{+}, x^{-} ≥ 0, we may assume x ≥ 0. When b ≥ 0 does not hold, we first solve max {-z : Ax - ze ≤ b, z ∈ ^{κ}ℝ_{≥0}} with e = (1, ..., 1)^{T} ∈ ^{κ}ℝ^{m} to obtain a feasible x; the target objective value is z = 0. We begin with z := |min {b_{1}, ..., b_{m}}| and x := 0 as in the first case. Pivoting if necessary, we may therefore assume b ≥ 0 in what follows.
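The construction of this auxiliary feasibility LP can be sketched as follows; the function name `phase1_setup` and the list-of-lists representation are illustrative assumptions, not part of the text.

```python
def phase1_setup(A, b):
    """Build the auxiliary LP  max -z  s.t.  Ax - ze <= b,  z >= 0,
    used to obtain a feasible x when b >= 0 does not hold.
    Returns the augmented matrix, the auxiliary objective and the
    starting point with x = 0 and z = |min b_i| from the text."""
    n = len(A[0])
    # append the column -e = (-1, ..., -1)^T for the variable z
    A_aux = [list(row) + [-1.0] for row in A]
    # objective: maximise -z, i.e. coefficients (0, ..., 0, -1)
    c_aux = [0.0] * n + [-1.0]
    z0 = abs(min(b))
    x0 = [0.0] * n + [z0]   # (x, z) = (0, |min b_i|) is feasible
    return A_aux, c_aux, x0
```

Any point (0, z) with z ≥ |min b_{i}| satisfies Ax - ze ≤ b, so the auxiliary LP starts feasible and reaches z = 0 exactly when the original LP has a feasible point.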

Let i, j, k ∈ ^{κ}ℕ* and let a_{i}^{T} be the i-th row vector of A. If c_{j} ≤ 0 for all j, the LP is solved. If there is some c_{j} > 0 with a_{ij} ≤ 0 for all i, the LP is positively unbounded. Otherwise, we divide each constraint a_{i}^{T}x ≤ b_{i} by ||a_{i}||, and then divide all c_{j} and a_{ij} by min {|a_{ij}| : a_{ij} ≠ 0} for each column j; this scaling is reversed later. If necessary, we renormalise by ||a_{i}|| again. This yields good runtime behaviour even on strongly deformed polytopes.
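The row-scaling part of this step might look as follows in Python; `normalise_rows` is a hypothetical name, and the analogous column scaling by min |a_{ij}| is omitted for brevity.

```python
import math

def normalise_rows(A, b):
    """Divide each constraint a_i^T x <= b_i by ||a_i|| so that every
    row vector has unit Euclidean norm -- a sketch of the scaling
    step described in the text (column scaling handled analogously)."""
    A2, b2 = [], []
    for row, bi in zip(A, b):
        norm = math.sqrt(sum(a * a for a in row))
        A2.append([a / norm for a in row])
        b2.append(bi / norm)
    return A2, b2
```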

In each step, we can remove duplicate constraints and those with a_{i} ≤ 0, since they are redundant (this can be avoided by adding an extra slack variable in each case); the second case is treated analogously. If in both cases b_{i} = 0 and a_{i} ≥ 0 for some i, the LP has maximum 0 and solution x = 0 if b ≥ 0, and otherwise it has no solution. In each step, for each c_{j} > 0 and each non-basic variable x_{j}, we select the row k attaining the minimum ratio b_{k}/a_{kj} over all i with a_{ij} > 0.

The variables marked with * are those of the next step. The next potential vertex is given by x_{j}* = x_{j} + b_{k}/a_{kj} for feasible x*. To follow the steepest edge, we select the pivot a_{kj} corresponding to the x_{j} that maximises c^{T}(x* - x)/||x* - x||, i.e. c_{j}^{2}/(1 + Σ_{i} a_{ij}^{2}), in the k-th constraint. If there are several maxima, we select max c_{j}b_{k}/a_{kj}, or alternatively the smallest angle min Σ c_{j}*/||c*||, according to the rule of the best pivot value.
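The steepest-edge column choice quoted above can be sketched like this (the function name and tie-breaking omission are assumptions; ties would be broken by the best-pivot rule):

```python
def steepest_edge_column(A, c):
    """Among columns with c_j > 0, pick the one maximising the
    steepest-edge score c_j^2 / (1 + sum_i a_ij^2) from the text;
    returns None when c_j <= 0 for all j (LP solved)."""
    best_j, best_score = None, None
    for j, cj in enumerate(c):
        if cj <= 0:
            continue
        denom = 1.0 + sum(row[j] ** 2 for row in A)
        score = cj * cj / denom
        if best_score is None or score > best_score:
            best_j, best_score = j, score
    return best_j
```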

If more than n of the values b_{i} equal 0 and we cannot directly increase the objective function, we relax (perturb) the constraints with b_{i} = 0 by the same minimal modulus. These perturbations need not be written into the tableau: we simply set b_{i} := ||a_{i}||. If another multiple vertex is encountered, unlikely as this is, we simply increase the earlier b_{i} by ||a_{i}||.
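A minimal sketch of this relaxation, recording which constraints were perturbed so the step can be reverted later (`perturb_degenerate` is a hypothetical name):

```python
import math

def perturb_degenerate(A, b):
    """Relax every constraint with b_i = 0 by setting b_i := ||a_i||,
    as the text suggests for resolving a multiple (degenerate) vertex.
    Returns the perturbed right-hand side and the relaxed indices."""
    b2 = list(b)
    relaxed = []
    for i, (row, bi) in enumerate(zip(A, b)):
        if bi == 0:
            b2[i] = math.sqrt(sum(a * a for a in row))
            relaxed.append(i)
    return b2, relaxed
```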

The cost of eliminating a multiple vertex, after which we revert the relaxation, corresponds to the cost of solving an LP with b = 0. The same task may be required at the end of the procedure, when we check whether the LP admits further solutions because at least two of the c_{j} are 0. Along the chosen path, the objective function increases (effectively) strictly monotonically. The updated values c_{j}*, a_{ij}* and b_{i}* are then simply computed by the rectangle rule.
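The rectangle rule is the usual tableau update; a generic sketch, assuming b and c are stored as extra column and row inside the tableau `T`:

```python
def pivot(T, k, j):
    """One tableau update by the rectangle rule: after scaling the
    pivot row by T[k][j], every other row i becomes
    T[i][l] - T[i][j] * T[k][l], clearing column j outside row k."""
    m, n = len(T), len(T[0])
    p = T[k][j]
    T[k] = [t / p for t in T[k]]              # scale pivot row to 1
    for i in range(m):
        if i != k and T[i][j] != 0:
            f = T[i][j]
            T[i] = [T[i][l] - f * T[k][l] for l in range(n)]
    return T
```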

In the worst case, the simplex method is not strongly polynomial under any given set of pivoting rules, despite the diameter theorem for polytopes (see below): an exponential "drift" can be constructed with Klee-Minty or Jeroslow polytopes, among others, forcing the selection of the least favourable edge and creating a large deviation from the shortest path. This is consistent with existing proofs. The result follows.⃞

Theorem: The normal method solves the LP in at most O(mn) operations and is strongly polynomial.

Proof and algorithm: The normal method includes the two phases of the simplex method and solves the LP, if possible, by moving inside M towards the potential maximum in a direction computed from one of O(n) orthogonal directions. Every two-dimensional LP is solved in O(m) by the bisection method. The method begins like the simplex method, as does the reduction to a single phase in O(mn) via the dual programme.
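The bisection step can be sketched generically: given a feasibility test for "is objective value t attainable?", bisect on t. The interface (`feasible`, `bisect_max`) is an illustrative assumption, not the text's own notation.

```python
def bisect_max(feasible, lo, hi, tol=1e-9):
    """Bisection on the objective value: `feasible(t)` reports whether
    the (sub-)LP admits a point with objective >= t.  Assumes
    feasible(lo) holds and feasible(hi) fails; each test would cost
    O(m) for the two-dimensional subproblems mentioned in the text."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo
```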

We define the *height* h_{1} := c^{T}x and wlog eliminate x_{1} with c_{1} > 0. If we cannot leave x, we set b_{i}* := b_{i} + d for all i with a_{i}^{T}x = b_{i} and a small d ∈ ^{κ}ℝ_{>0}. We begin each iteration step by solving the LPs max h_{j} in h_{j}, u ∈ ^{κ}ℝ_{≥0} for all j ≥ 1 with x* := (h_{j}, x_{2}, ..., x_{j-1}, u, x_{j+1}, ..., x_{n})^{T} and constant x_{k} for k ∉ {1, j}. We then compute a favourable direction r ∈ ^{κ}ℝ^{n} with r_{j} := x_{j}Δh_{j}/max_{j} Δh_{j} and solve the LP max h_{1} in h_{1}, v ∈ ^{κ}ℝ_{≥0} with x* := (h_{1}, x_{2} + vr_{2}, ..., x_{n} + vr_{n})^{T}.
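The formula r_{j} := x_{j}Δh_{j}/max Δh_{j} for combining the coordinatewise height gains into one direction reads, as a sketch (the name `direction` and the assumption that some Δh_{j} is positive are mine):

```python
def direction(x, dh):
    """Combine the coordinatewise height gains dh[j] (= Delta h_j from
    the one-dimensional subproblems) into one search direction r with
    r_j = x_j * dh_j / max_j dh_j; assumes max(dh) > 0."""
    top = max(dh)
    return [xj * dj / top for xj, dj in zip(x, dh)]
```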

We always continue with the x* belonging to the greatest value h_{j} reached so far for j ≥ 1. We successively set x_{j}* := (max x_{j} + min x_{j})/2 for all j > 1 until these x_{j}* have changed sufficiently, if possible; then we restore b_{i}* := b_{i}. We can generally extrapolate from preceding results in O(m + n), so one iteration step is completed in O(mn). Since all h_{j} are processed in O(1), independently of m and n, the claim follows.⃞

Remarks: We terminate the procedure when h_{1} can no longer be increased, or when the LP appears to be unbounded, since we reach a ceiling for h_{1}. If b ≥ 0 does not hold, we first solve the LP max h_{1} subject to Ax + q ≤ b and x ≥ 0 with q ∈ ^{κ}ℝ^{m} and z := |min {b_{1}, ..., b_{m}}|, where q_{i} := h_{1} - z for b_{i} < 0 and q_{i} := 0 otherwise. The initial value is h_{1} = 0 and the target value is h_{1} = z. The dual programme of max {c^{T}x : c ∈ ^{κ}ℝ^{n}, x ∈ M, x ≥ 0} reads min {b^{T}y : y ∈ ^{κ}ℝ^{m}, y ≥ 0, A^{T}y ≥ c}.
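Forming the stated dual from the primal data is purely mechanical; a minimal sketch with an assumed function name:

```python
def dual(A, b, c):
    """From the primal  max c^T x, Ax <= b, x >= 0  build the dual
    min b^T y, A^T y >= c, y >= 0  as stated in the text.  Returns
    (constraint matrix A^T, right-hand side c, objective b)."""
    m, n = len(A), len(A[0])
    At = [[A[i][j] for i in range(m)] for j in range(n)]  # transpose
    return At, c, b
```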

Corollary: Every linear system (LS) of equations Ax = b with x ∈ ^{κ}ℝ^{n} may be solved in at most O(mn), provided that a solution exists.

Proof: We write Ax = b as Ax ≤ b and -Ax ≤ -b where x = x^{+} - x^{-} with x^{+}, x^{-} ≥ 0.⃞
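The reduction in this proof can be sketched directly; the function name is an assumption:

```python
def equalities_to_inequalities(A, b):
    """Rewrite the system Ax = b as the inequality pair
    Ax <= b and -Ax <= -b, the reduction used in the corollary."""
    A2 = [list(row) for row in A] + [[-a for a in row] for row in A]
    b2 = list(b) + [-bi for bi in b]
    return A2, b2
```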

Theorem: Every convex programme min {f_{1}(x) : x ∈ ^{κ}ℝ^{n}, x ≥ 0, (f_{2}(x), …, f_{m}(x))^{T} ≤ 0}, where the f_{i} are convex posynomials, is strongly polynomial and may be solved in O(p) by the normal method combined with Newton's method applied to the f_{i}, assuming it is solvable, where p ∈ ^{κ}ℕ* denotes the number of operands x_{j} of the f_{i} and the objective function f_{1} is linearised.

Proof: The claim follows from the existence of the normal method.⃞

Remarks: The normal method is currently the fastest known LS/LP-solving algorithm and is numerically very stable, since the initial data are barely altered. It can also be applied to branch and bound, in particular for nonconvex optimisation. The normal method is a better candidate for parallelisation than the simplex method. It can easily be extended to convex programmes with vector-valued or other convex f_{i}.

Diameter theorem for polytopes: The diameter of an n-dimensional polytope defined by m constraints with m, n ∈ ^{κ}ℕ* is at most max (2(m - n), 0).

Proof: Each vertex of a (potentially deformed) hypercube is formed by at most n hyperplanes. If we complete the polytope by adding or removing hyperplanes, the claim follows for the chosen path, since each step adds at most two additional edges. This theorem can be extended to polyhedra analogously by dropping the requirement of finiteness.⃞
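For concreteness, the stated bound as a one-line function (illustrative only):

```python
def diameter_bound(m, n):
    """Upper bound max(2(m - n), 0) on the diameter of an
    n-dimensional polytope with m constraints, per the theorem."""
    return max(2 * (m - n), 0)
```

For a cube, m = 6 and n = 3 give the bound 6, while the actual diameter is 3.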

Further thoughts: Gomory cuts or Lenstra's algorithm can find an integer solution of the original problem in polynomial time if we additionally assume wlog that A, b, and c are integer and that m and n are fixed. By detecting redundant constraints and hyperplanes, a full-dimensional LP may be obtained. This shows that the problem of (mixed) integer linear programming is not NP-complete:

Theorem: (Mixed) integer LPs may be solved in polynomial time.⃞
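For reference, the standard Gomory fractional cut mentioned above can be derived from a tableau row with fractional right-hand side as Σ_{j} frac(a_{j}) x_{j} ≥ frac(b); the sketch below (with an assumed function name) only computes the cut coefficients.

```python
import math

def gomory_cut(row, rhs):
    """Standard Gomory fractional cut from a tableau row
    sum_j a_j x_j = rhs with fractional rhs: the cut reads
    sum_j frac(a_j) x_j >= frac(rhs), frac(t) = t - floor(t)."""
    frac = lambda t: t - math.floor(t)
    return [frac(a) for a in row], frac(rhs)
```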


© 04.10.2016 by Boris Haase
