## Linear Programming

This section discusses the solution of linear programmes (LPs). It presents the well-known simplex method and its failure to be strongly polynomial in the worst case, together with a technique that rapidly eliminates problematic multiple vertices. The normal method, which is at most quadratic and moreover strongly polynomial, is then compared to the simplex method as a solution of Smale's 9th problem.

Theorem: The simplex method is not strongly polynomial.

Proof and algorithm: Let M := {x ∈ ℝⁿ : Ax ≤ b} with b ∈ ℝᵐ, A ∈ ℝᵐˣⁿ and m, n ∈ ℕ* be the feasible domain of the LP max {cᵀx : c ∈ ℝⁿ, x ∈ M}. By taking the dual or setting x := x⁺ − x⁻ with x⁺, x⁻ ≥ 0, we may assume x ≥ 0. If b ≥ 0 does not hold, we first solve max {−z : Ax − ze ≤ b, z ∈ ℝ≥0} with e := (1, …, 1)ᵀ ∈ ℝᵐ, obtaining a feasible x once the objective reaches z = 0. We begin with z := |min {b₁, …, bₘ}| and x := 0 as in the first case. Pivoting if necessary, we may assume that b ≥ 0.
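As a minimal sketch (plain Python; the helper names `phase1_start` and `aux_feasible` are illustrative, not the author's), the starting point x := 0, z := |min {b₁, …, bₘ}| is always feasible for the auxiliary system Ax − ze ≤ b:

```python
# Sketch of the phase-1 starting point: for any b, the point x = 0 with
# z = |min b_i| satisfies A@0 - z*e <= b, i.e. -z <= b_i for every i.

def phase1_start(A, b):
    """Return (x, z) feasible for the auxiliary system Ax - z*e <= b."""
    n = len(A[0])
    z = abs(min(b)) if min(b) < 0 else 0.0
    x = [0.0] * n
    return x, z

def aux_feasible(A, b, x, z):
    """Check Ax - z*e <= b componentwise (e = (1, ..., 1)^T)."""
    for a_i, b_i in zip(A, b):
        lhs = sum(a_ij * x_j for a_ij, x_j in zip(a_i, x)) - z
        if lhs > b_i + 1e-12:
            return False
    return True

A = [[1.0, 2.0], [-3.0, 1.0], [0.0, -1.0]]
b = [4.0, -2.0, -5.0]          # b >= 0 does not hold here
x, z = phase1_start(A, b)       # x = 0, z = |min b_i| = 5
assert aux_feasible(A, b, x, z)
```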

Let i, j, k ∈ ℕ* and let aᵢᵀ be the i-th row vector of A. If cⱼ ≤ 0 for all j, the LP is solved. If for some j with cⱼ > 0 we have aᵢⱼ ≤ 0 for all i, the LP is positively unbounded. Otherwise, we divide every constraint aᵢᵀx ≤ bᵢ by ||aᵢ|| and, for each j, all cⱼ and aᵢⱼ by the minimum of the |aᵢⱼ| with aᵢⱼ ≠ 0. This scaling will be reversed later. If necessary, we renormalise by ||aᵢ||. This yields good runtime performance even on strongly deformed polytopes.
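The row and column scaling can be sketched as follows (plain Python; `normalise` and its return of the column factors for later reversal are conveniences of this sketch, not the author's exact procedure):

```python
import math

def normalise(A, b, c):
    """Scale each row a_i^T x <= b_i by ||a_i||, then scale each column j
    (all a_ij together with c_j) by the minimum nonzero |a_ij|.
    Returns scaled copies plus the column factors so the scaling can be
    reversed later."""
    A = [row[:] for row in A]; b = b[:]; c = c[:]
    for i, row in enumerate(A):
        norm = math.sqrt(sum(a * a for a in row))
        if norm > 0:
            A[i] = [a / norm for a in row]
            b[i] /= norm
    col_factors = []
    for j in range(len(c)):
        nz = [abs(A[i][j]) for i in range(len(A)) if A[i][j] != 0]
        f = min(nz) if nz else 1.0
        for i in range(len(A)):
            A[i][j] /= f
        c[j] /= f
        col_factors.append(f)
    return A, b, c, col_factors
```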

In each step, we can remove multiple (duplicate) constraints and those with aᵢ ≤ 0, since they are redundant (avoidable by adding an extra slack variable in each case); the second case is analogous. If in both cases bᵢ = 0 and aᵢ ≥ 0 for some i, then the LP has maximum 0 and solution x = 0 if b ≥ 0; otherwise it has no solution. In each step, for each cⱼ > 0 and non-basic variable xⱼ, we select the minimum ratio bₖ/aₖⱼ over the aᵢⱼ > 0.
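The minimum-ratio selection can be sketched as (illustrative Python; `ratio_test` is a hypothetical helper name):

```python
def ratio_test(A, b, j):
    """Minimum-ratio test for entering column j: among rows with a_ij > 0,
    return the row k minimising b_k / a_kj (None if the column is
    nonpositive, i.e. the LP is unbounded in direction j)."""
    best_k, best_ratio = None, None
    for i, row in enumerate(A):
        if row[j] > 0:
            r = b[i] / row[j]
            if best_ratio is None or r < best_ratio:
                best_k, best_ratio = i, r
    return best_k

A = [[2.0, 1.0], [1.0, 3.0], [-1.0, 1.0]]
b = [4.0, 6.0, 1.0]
assert ratio_test(A, b, 0) == 0   # min(4/2, 6/1) -> row 0
```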

The variables marked with * are considered in the next step. The next potential vertex is given by xⱼ* = xⱼ + bₖ/aₖⱼ for feasible x*. To select the steepest edge, choose the pivot aₖⱼ corresponding to the xⱼ that maximises cᵀ(x* − x)/||x* − x||, i.e. cⱼ|cⱼ|/(1 + Σᵢ aᵢⱼ²), in the k-th constraint. If there are several maxima, select max cⱼbₖ/aₖⱼ, or alternatively the smallest angle min Σ cⱼ*/||c*||, according to the rule of the best pivot value.
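The steepest-edge score cⱼ|cⱼ|/(1 + Σᵢ aᵢⱼ²) can be sketched as (hypothetical helper name):

```python
def steepest_edge_column(A, c):
    """Pick the entering column by the steepest-edge score
    c_j * |c_j| / (1 + sum_i a_ij^2), over columns with c_j > 0
    (returns None when every c_j <= 0, i.e. the LP is solved)."""
    best_j, best_score = None, None
    for j, cj in enumerate(c):
        if cj <= 0:
            continue
        col_norm2 = sum(row[j] ** 2 for row in A)
        score = cj * abs(cj) / (1.0 + col_norm2)
        if best_score is None or score > best_score:
            best_j, best_score = j, score
    return best_j

# Column 0 scores 4/2 = 2, column 1 scores 4/10 = 0.4:
assert steepest_edge_column([[1.0, 0.0], [0.0, 3.0]], [2.0, 2.0]) == 0
```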

If more than n of the values bᵢ equal 0 and we cannot directly maximise the objective function, we relax (perturb) the constraints with bᵢ = 0 by the same minimal factor. These relaxations need not be written into the tableau: we simply set bᵢ := ||aᵢ||. If another multiple vertex is encountered, despite this being unlikely, we simply increase the earlier bᵢ by ||aᵢ||.
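A minimal sketch of the perturbation bᵢ := ||aᵢ||, with a hypothetical counter `eps_rounds` for the (unlikely) repeated multiple vertices:

```python
import math

def perturb_degenerate(A, b, eps_rounds=0):
    """Relax every degenerate constraint (b_i = 0): set b_i := ||a_i||,
    and add another ||a_i|| for each further multiple vertex encountered
    (counted by eps_rounds)."""
    b = b[:]
    for i, row in enumerate(A):
        if b[i] == 0:
            b[i] = (1 + eps_rounds) * math.sqrt(sum(a * a for a in row))
    return b
```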

The cost of eliminating a multiple vertex, after which we revert the relaxation, corresponds to the cost of solving an LP with b = 0. The same task may be required at the end of the process, when checking whether the LP admits any other solutions once at least two of the cⱼ are 0. Along the chosen path, the objective function increases (effectively) strictly monotonically. The updated cⱼ*, aᵢⱼ* and bᵢ* are then simply calculated using the rectangle rule.
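The rectangle rule amounts to a standard tableau pivot; a sketch:

```python
def rectangle_rule_pivot(T, k, j):
    """One tableau pivot on entry T[k][j] via the rectangle rule: the
    pivot row is divided by the pivot element, and every other entry is
    updated by T[i][l] -= T[i][j] * T[k][l].  T is the full tableau
    (rows of A extended by b, plus the c-row, say)."""
    piv = T[k][j]
    T[k] = [v / piv for v in T[k]]
    for i in range(len(T)):
        if i != k:
            factor = T[i][j]
            T[i] = [v - factor * w for v, w in zip(T[i], T[k])]
    return T

T = rectangle_rule_pivot([[2.0, 1.0, 4.0], [1.0, 3.0, 6.0]], 0, 0)
assert T[0] == [1.0, 0.5, 2.0] and T[1] == [0.0, 2.5, 4.0]
```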

In the worst case, the simplex method is not strongly polynomial under any given set of pivoting rules, despite the diameter formula for polytopes (see below): an exponential "drift" can be constructed with Klee-Minty or Jeroslow polytopes, among others, which creates a large deviation from the shortest path by forcing the selection of the least favourable edge. This is consistent with existing proofs. The result follows. □
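For illustration, one common formulation of the Klee-Minty cube (a deformed unit cube with parameter ε; the exact form varies across the literature) can be generated as:

```python
def klee_minty(n, eps=0.1):
    """One common form of the Klee-Minty cube in n dimensions:
    x_1 <= 1 and eps*x_{i-1} <= x_i <= 1 - eps*x_{i-1} for i = 2..n,
    objective max x_n.  An unfavourable pivot rule visits all 2^n
    vertices of this cube."""
    A, b = [], []
    row = [0.0] * n; row[0] = 1.0
    A.append(row); b.append(1.0)           # x_1 <= 1
    for i in range(1, n):
        up = [0.0] * n; up[i] = 1.0; up[i - 1] = eps
        A.append(up); b.append(1.0)        # x_i + eps*x_{i-1} <= 1
        lo = [0.0] * n; lo[i] = -1.0; lo[i - 1] = eps
        A.append(lo); b.append(0.0)        # eps*x_{i-1} - x_i <= 0
    c = [0.0] * n; c[-1] = 1.0
    return A, b, c

A, b, c = klee_minty(3)
assert len(A) == 2 * 3 - 1 and len(c) == 3
```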

Theorem: The normal method solves the LP in at most O(mn) and is strongly polynomial.

Proof and algorithm: The normal method comprises the two phases of the simplex method and solves the LP, if possible, by moving in a direction calculated from one of O(n) orthogonal directions inside M as the next key step towards the potential maximum. All two-dimensional LPs are solved in O(m) by the bisection method. The method begins, and handles multiple vertices, exactly as the simplex method does.
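The bisection idea on a one-dimensional subproblem can be sketched as follows (a generic illustration, not the author's exact sub-procedure: it finds the largest feasible step along a direction d, with hypothetical parameters `t_hi` and `tol`):

```python
def max_step_bisect(A, b, x, d, t_hi=1e6, tol=1e-9):
    """Largest t >= 0 with A(x + t d) <= b, found by bisection on [0, t_hi]
    to tolerance tol."""
    def feas(t):
        return all(sum(a * (xj + t * dj) for a, xj, dj in zip(row, x, d))
                   <= bi + tol
                   for row, bi in zip(A, b))
    lo, hi = 0.0, t_hi
    if feas(hi):
        return hi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feas(mid):
            lo = mid
        else:
            hi = mid
    return lo
```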

We define the height h := cᵀx and wlog eliminate x₁ with c₁ > 0. We first execute a conservative sub-procedure that checks for a maximum on the boundary of the dome, followed by a progressive sub-procedure that checks for a maximum at the centre of the dome. The conservative sub-procedure is then repeated, and so on, until the maximum is found. Thus, we begin by solving the LPs max hⱼ in hⱼ, u ∈ ℝ≥0 for all j > 1 with x* := (hⱼ, x₂, …, xⱼ₋₁, u, xⱼ₊₁, …, xₙ)ᵀ and constant xₖ for 1 ≠ k ≠ j.

The xⱼ* > 0 may need to be scaled down to the same minimal value. We then iteratively set xⱼ := (max xⱼ + min xⱼ)/2 until all xⱼ have changed significantly. Otherwise, we minimally relax the constraints that x satisfies with equality and similarly compute a favourable direction r ∈ ℝⁿ away from x. We once again solve the LPs listed above and continue the procedure.

We then calculate a favourable direction r ∈ ℝⁿ with rⱼ := xⱼhⱼ/max hⱼ. We solve the LP max h in v, w ∈ ℝ≥0 with x* := x + (v, 0, …, 0)ᵀ + wr ∈ M. If a predefined upper limit for h is exceeded, the problem is unbounded and we terminate the procedure. Since each key step of complexity O(mn) makes constant progress in h, independently of m and n, there are only constantly many such steps. The claim follows, provided that the LP is solvable. □

Corollary: Every linear system (LS) of equations Ax = b with x ∈ ℝⁿ may be solved in at most O(mn), provided that a solution exists.

Proof: We write Ax = b as Ax ≤ b and −Ax ≤ −b, where x = x⁺ − x⁻ with x⁺, x⁻ ≥ 0. □
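A minimal sketch of this reformulation (illustrative Python; `ls_to_lp` is a hypothetical helper name):

```python
def ls_to_lp(A, b):
    """Rewrite Ax = b as the inequality system [A; -A] x <= [b; -b]
    (a point satisfies both stacked blocks iff Ax = b)."""
    A2 = [row[:] for row in A] + [[-a for a in row] for row in A]
    b2 = b[:] + [-bi for bi in b]
    return A2, b2

A = [[1.0, 1.0], [1.0, -1.0]]
b = [3.0, 1.0]
A2, b2 = ls_to_lp(A, b)
x = [2.0, 1.0]                      # solves Ax = b
assert all(sum(a * xj for a, xj in zip(row, x)) <= bi + 1e-12
           for row, bi in zip(A2, b2))
```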

Theorem: Every convex programme min {f₁(x) : x ∈ ℝⁿ, x ≥ 0, (f₂(x), …, fₘ(x))ᵀ ≤ 0}, where the fᵢ : ℝⁿ → ℝ are convex posynomials, is strongly polynomial and may be solved by the normal method together with Newton's method, which handles the fᵢ, in at most O(p), assuming that it is solvable, where p ∈ ℕ* denotes the number of operands xⱼ of the fᵢ and the objective function f₁ is linearised.

Proof: The claim follows from the existence of the normal method. □

Remarks: The normal method is currently the fastest known LS/LP-solving algorithm and is numerically very stable, since the initial data are barely altered. It can also be applied to branch and bound, in particular for nonconvex optimisation. The normal method is a better candidate for parallelisation than the simplex method. It can easily be extended to convex programmes with vector-valued or other convex fi.

Diameter theorem for polytopes: The diameter of a d-dimensional polytope defined by n constraints with d, n ∈ ℕ* is at most max(2(n − d), 0).

Proof: Each vertex of a (potentially deformed) hypercube is formed by at most d hyperplanes. If we complete the polytope by adding or removing hyperplanes, the claim follows for the chosen path, since each step adds at most two additional edges. The theorem extends analogously to polyhedra by dropping the requirement of finiteness. □
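As a small sanity check, not part of the proof, the bound can be verified on the undeformed d-cube, which has n = 2d facets and graph diameter d ≤ max(2(n − d), 0) = 2d:

```python
from collections import deque

def hypercube_diameter(d):
    """Graph diameter of the d-dimensional hypercube (2^d vertices as
    bitstrings, edges between vertices at Hamming distance 1), by BFS
    from every vertex."""
    verts = range(2 ** d)
    best = 0
    for s in verts:
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for i in range(d):
                w = v ^ (1 << i)
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        best = max(best, max(dist.values()))
    return best

# The d-cube needs n = 2d facet constraints; its diameter d obeys
# the stated bound max(2(n - d), 0) = 2d.
for d in range(1, 5):
    assert hypercube_diameter(d) == d
    assert d <= max(2 * (2 * d - d), 0)
```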

Further thoughts: Gomory or Lenstra cuts can find an integer solution of the original problem in polynomial time if we additionally assume wlog that A, b, and c are integral and that m and n are fixed. By eliminating any redundant constraints and hyperplanes, a full-dimensional LP may be obtained. This shows that the problem of (mixed) integer linear programming is not NP-complete:

Theorem: (Mixed) integer LPs may be solved in polynomial time. □