# Homepage of Boris Haase

## #55: Improvement of Linear Programming on 23.05.2020

In the following section, Set Theory, Topology and Nonstandard Analysis are presupposed. The exponential simplex method and the polynomial intex method (inter-/extrapolation) solve linear programmes (LPs).

Diameter theorem for polytopes: The diameter of an $$n$$-dimensional polytope defined by $$m$$ constraints for $$m, n \in {}^{\omega}\mathbb{N}_{\ge 2}$$ is at most $$2(m + n - 3)$$.

Proof: At most $$m - 1$$ hyperplanes can be assembled into an incomplete cycle of dimension 2, and there are at most $$n - 2$$ sideways alternatives in the remaining dimensions. Overcoming every minimal distance requires at most two edges, which yields the factor 2.$$\square$$
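As an illustrative sanity check (not part of the proof), the bound $$2(m + n - 3)$$ can be compared with the $$n$$-cube, which has $$m = 2n$$ facet-defining constraints and vertex-edge graph diameter exactly $$n$$; the function name below is ours:

```python
# Sanity check of the bound 2(m + n - 3) from the diameter theorem
# against the n-cube: m = 2n facets, graph diameter exactly n.
def diameter_bound(m, n):
    """Upper bound on the diameter of a polytope with m constraints in dimension n."""
    return 2 * (m + n - 3)

for n in range(2, 10):
    m = 2 * n                          # facets of the n-cube
    assert n <= diameter_bound(m, n)   # cube diameter respects the bound
```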

Remark: Dropping the requirement of finiteness, the theorem can be extended to polyhedra analogously.

Definition: Let $$\vartheta := {_e}\omega \omega$$ and let $${|| \cdot ||}_{1}$$ be the 1-norm. Every variable $$x \in X \subseteq {}^{\omega}\mathbb{R}^{\omega}$$ passed to the next step is marked with $${}^{*}$$, where $$\Delta x := {x}^{*} - x$$. The unit vector of $$x$$ is $${_1}{x} := x/||x||$$, where $${_1}{0}$$ is undefined. Let $$x, y \in X$$ in $$(x, y, ...)^T$$ be row vectors. A method is polynomial if it requires computation time in seconds and memory in bits of $$\mathcal{O}({\omega}^{\mathcal{O}(1)})$$; it is exponential if either quantity is $$\mathcal{O}({e}^{|\mathcal{O}(\omega)|})$$. Let the eigenproduct (hitherto: determinant) of a square matrix be the product of its eigenvalues.$$\triangle$$

Theorem: The simplex method is exponential.

Proof and algorithm: Let $$P := \{x \in {}^{\omega}\mathbb{R}^{n} : Ax \le b, b \in {}^{\omega}\mathbb{R}^{m}, A \in {}^{\omega}\mathbb{R}^{m \times n}, m, n \in {}^{\omega}\mathbb{N}^{*}\}$$ be the feasible domain of the LP max $$\{{c}^{T}x : c \in {}^{\omega}\mathbb{R}^{n}, x \in P\}$$. Its dual attains $${x}^{*} \in {}^{\omega}\mathbb{R}_{\ge 0}^{m}$$. Setting $$x := {x}^{+} - {x}^{-}$$ with $${x}^{+}, {x}^{-} \ge 0$$ attains $${x}^{*} \in {}^{\omega}\mathbb{R}_{\ge 0}^{2n}$$. If $$b \ge 0$$ does not hold, solving max $$\{-z : Ax - b \le {(z, ..., z)}^{T} \in {}^{\omega}\mathbb{R}_{\ge 0}^{m}\}$$ yields a feasible $$x$$. Let $$|\text{min } \{{b}_{1}, ..., {b}_{m}\}|$$ be the initial value of $$z$$, let 0 be its target value, and let the starting point be $$x := 0$$.
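The Phase-I setup above can be sketched as follows; the helper name is ours, and the code merely verifies that the stated starting point $$(x, z) = (0, |\min\{b_1, ..., b_m\}|)$$ is feasible for the auxiliary LP:

```python
# Sketch of the Phase-I setup: max {-z : Ax - b <= (z,...,z)^T}
# with starting point x = 0 and z0 = |min{b_1,...,b_m}|.
def phase_one_start(A, b):
    """Return the initial z for the auxiliary LP; x starts at 0."""
    z0 = abs(min(b)) if min(b) < 0 else 0.0
    # The start (x, z) = (0, z0) is feasible: A*0 - b = -b <= z0 * 1.
    assert all(-bi <= z0 for bi in b)
    return z0

A = [[1.0, 2.0], [-1.0, 1.0]]
b = [4.0, -3.0]               # b is not >= 0, so Phase I is needed
z0 = phase_one_start(A, b)    # z0 == 3.0; the target value for z is 0
```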

A suitable pivot step ensures that $${b}^{*} \ge 0$$. Let $$i, j, k \in {}^{\omega}\mathbb{N}^{*}$$ and let $${a}_{i}^{T}$$ be the $$i$$-th row vector of $$A$$. If $${c}_{j} \le 0$$ for all $$j$$, the LP is solved. If $${a}_{ij} \le 0$$ for all $$i$$ for some $${c}_{j} > 0$$, the LP is positively unbounded. If $${a}_{ij} \le 0$$ for all $$i$$ for some $${c}_{j} = 0$$, drop $${c}_{j}$$ and $${A}_{.j}$$ as well as $${b}_{i}$$ and $${a}_{i}$$, the latter only when $${a}_{ij} < 0$$ holds. The inequality $${a}_{ij}{x}_{j} \ge 0 > {b}_{i}$$ for all $$j$$ likewise has no solution. If necessary, divide all $${a}_{i}^{T}x \le {b}_{i}$$ by $$||{a}_{i}||$$, and all $${c}_{j}$$ and $${a}_{ij}$$ by the minimum of $$|{a}_{ij}|$$ such that $${a}_{ij} \ne 0$$, for each $$j$$. This will be reversed later.

If necessary, renormalise by $$||{a}_{i}||$$. Redundant constraints (with $${a}_{i} \le 0$$) may always be removed. For each $${c}_{j} > 0$$ and non-basic variable $${x}_{j}$$, select the minimum ratio $${b}_{k}/{a}_{kj}$$ over all $${a}_{ij} > 0$$. The next potential vertex is given by $${x}_{j}^{*} = {x}_{j} + {b}_{k}/{a}_{kj}$$ for feasible $${x}^{*}$$. To select the steepest edge, pick the pivot $${a}_{kj}$$ corresponding to the $${x}_{j}$$ that maximises $${c}^{T}{_1}{\Delta x}$$ or $${c}_{j}^{2}/(1 + {||{A}_{.j}||}^{2})$$ in the $$k$$-th constraint.
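The ratio test combined with the steepest-edge proxy $${c}_{j}^{2}/(1 + {||{A}_{.j}||}^{2})$$ can be sketched like this; the function name and the example data are ours:

```python
# Sketch of the pivot choice: for each c_j > 0, take the minimum
# ratio b_k / a_kj over rows with a_kj > 0, and among those columns
# pick j maximising c_j^2 / (1 + ||A_.j||^2) (steepest-edge proxy).
def choose_pivot(A, b, c):
    best, pivot = None, None
    for j, cj in enumerate(c):
        if cj <= 0:
            continue
        ratios = [(b[i] / A[i][j], i) for i in range(len(b)) if A[i][j] > 0]
        if not ratios:
            continue                      # column signals unboundedness
        _, k = min(ratios)
        col_norm2 = sum(A[i][j] ** 2 for i in range(len(b)))
        score = cj * cj / (1 + col_norm2)
        if best is None or score > best:
            best, pivot = score, (k, j)
    return pivot                          # (row k, column j) or None

A = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
b = [4.0, 6.0, 5.0]
c = [3.0, 1.0]
k, j = choose_pivot(A, b, c)              # picks column 0, row 0
```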

Multiple maxima allow using the rule of the best pivot value max$${}_{k,j} {c}_{j}{b}_{k}/{a}_{kj}$$ or (more slowly) the smallest angle min $${{_{(1)}}(1, ..., 1)}^{T}{_1}{c}^{*}$$. If the objective function cannot be maximised directly, perturb, i.e. relax, the constraints with $${b}_{i} = 0$$ by the same minimal modulus; this need not be written into the tableau, as setting $${b}_{i} := ||{a}_{i}||$$ suffices.

If another multiple vertex is encountered, unlikely as this is, simply increase the earlier $${b}_{i}$$ by $$||{a}_{i}||$$. Leaving a multiple vertex, after which the relaxation is reverted, may require solving an LP with $$c > 0$$ and $$b = 0$$. Otherwise, the objective function increases strictly monotonically along the chosen path. Finally, $${c}_{j}^{*}, {a}_{ij}^{*}$$ and $${b}_{i}^{*}$$ can be computed simply using the rectangle rule (cf. , p. 63).
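The rectangle rule is the usual tableau update: every non-pivot entry $$t_{ij}$$ becomes $$t_{ij} - t_{is}t_{rj}/t_{rs}$$ for a pivot element $$t_{rs}$$. A minimal sketch on a tableau stored as a list of rows:

```python
# Rectangle-rule pivot on tableau entry (r, s): divide the pivot row
# by the pivot, then clear the pivot column from all other rows.
def pivot(t, r, s):
    m, n = len(t), len(t[0])
    p = t[r][s]
    t[r] = [v / p for v in t[r]]
    for i in range(m):
        if i != r and t[i][s] != 0:
            f = t[i][s]
            t[i] = [t[i][j] - f * t[r][j] for j in range(n)]
    return t

t = [[2.0, 1.0, 10.0],
     [1.0, 3.0, 15.0]]
pivot(t, 0, 0)        # pivot on the entry 2.0; column 0 becomes (1, 0)^T
```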

Despite the diameter theorem for polytopes, the simplex method is not polynomial in the worst case under any given set of pivoting rules, since an exponential "drift" can be constructed (e.g. for Klee-Minty or Jeroslow polytopes), creating a large deviation from the shortest path by forcing the selection of an unfavourable edge at every step. The result follows in accordance with the state of research.$$\square$$
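For reference, one common parametrisation of the Klee-Minty cube mentioned above (the powers-of-10 variant; this particular form is ours, not taken from the source) can be generated as follows:

```python
# One standard form of the Klee-Minty cube, on which Dantzig's rule
# visits all 2^n vertices:
#   max  sum_j 10^(n-j) x_j
#   s.t. 2*sum_{j<i} 10^(i-j) x_j + x_i <= 100^(i-1),  x >= 0.
def klee_minty(n):
    c = [10 ** (n - j) for j in range(1, n + 1)]
    A = [[2 * 10 ** (i - j) if j < i else (1 if j == i else 0)
          for j in range(1, n + 1)] for i in range(1, n + 1)]
    b = [100 ** (i - 1) for i in range(1, n + 1)]
    return c, A, b

c, A, b = klee_minty(3)
# The optimum is x = (0, ..., 0, 100^(n-1)) with value 100^(n-1).
```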

Theorem: The intex method solves every solvable LP in $$\mathcal{O}({\vartheta}^{3})$$.

Proof and algorithm: First, normalise and scale $${b}^{T}y - {c}^{T}x \le 0$$, $$Ax \le b$$ as well as $${A}^{T}y \ge c$$. Let the height $$h$$ have the initial value $$h_0 := s |\min \; \{b_1, ..., b_m, -c_1, ..., -c_n\}|$$ for the elongation factor $$s \in \, ]1, 2]$$. The LP min $$\{h \in [0, h_0] : x \in {}^{\omega}\mathbb{R}_{\ge 0}^{n}, y \in {}^{\omega}\mathbb{R}_{\ge 0}^{m},{b}^{T}y - {c}^{T}x \le h, Ax - b \le (h, ..., h)^T \in {}^{\omega}\mathbb{R}_{\ge 0}^{m}, c - {A}^{T}y \le (h, ..., h)^T \in {}^{\omega}\mathbb{R}_{\ge 0}^{n}\}$$ has $$k$$ constraints and the feasible starting point $$(x_0, y_0, h_0/s)^{T} \in {}^{\omega}\mathbb{R}_{\ge 0}^{m+n+1}$$, e.g. $$(0, 0, h_0/s)^{T}$$. It identifies the mutually dual LPs max $$\{{c}^{T}x : c \in {}^{\omega}\mathbb{R}^{n}, x \in {P}_{\ge 0}\}$$ and min $$\{{b}^{T}y : y \in {}^{\omega}\mathbb{R}_{\ge 0}^{m}, {A}^{T}y \ge c\}$$.
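A sketch of the initialisation, assuming $$h_0 = s\,|\min\{b_1, ..., b_m, -c_1, ..., -c_n\}|$$ so that the starting point $$(0, 0, h_0/s)$$ satisfies all three constraint groups; the function name is ours:

```python
# Sketch of the combined self-dual LP start in the variables (x, y, h):
# rows: b^T y - c^T x <= h,  A x - h*1 <= b,  c - A^T y <= h*1.
# Feasibility of the start (x, y, h) = (0, 0, h0/s) is checked directly.
def intex_start(A, b, c, s=1.5):
    h0 = s * abs(min(list(b) + [-cj for cj in c]))
    h = h0 / s
    # at x = 0, y = 0:  -b <= h*1,  c <= h*1,  0 <= h
    assert all(-bi <= h for bi in b)
    assert all(cj <= h for cj in c)
    assert 0.0 <= h
    return h0

A = [[1.0, 1.0], [2.0, 0.5]]
b = [4.0, 3.0]
c = [2.0, 1.0]
h0 = intex_start(A, b, c)     # 1.5 * |min{4, 3, -2, -1}| == 3.0
```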

Let the point $$p := (x, y, h)^T$$ approximate the centre of gravity of the subpolytope $$P^*$$ componentwise as $$p_k^* := (\min p_k + \max p_k)/2$$ until $${|| \Delta p ||}_{1}$$ is sufficiently small. Here $$x$$ takes precedence over $$y$$. Then extrapolate $$p$$ through $${p}^{*}$$ into $$\partial P^*$$ as $$u$$. Put $$p := p^* + (u - p^*)/s$$ to shun $$\partial P^*$$. Hereupon, approximate $$p$$ again more deeply as the centre of gravity. After optionally solving all LPs min$${}_{k} {h}_{k}$$ by bisection methods for $${h}_{k} \in {}^{\omega}\mathbb{R}_{\ge 0}$$ in $$\mathcal{O}({\vartheta}^{2})$$ each time, $$v \in {}^{\omega}\mathbb{R}^{k}$$ may be determined such that $$v_k := \Delta{p}_{k} \Delta{h}_{k}/r$$ and $$r :=$$ min$${}_{k} \Delta{h}_{k}$$. For simplicity, let $$|\Delta{p}_{1}| = ... = |\Delta{p}_{m+n}|$$.
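The extrapolation and retreat steps can be sketched in a strongly simplified toy model (ours, not the full method) for a polytope $$\{z : Az \le b\}$$, where the centre approximation is taken as the current point: walk along a direction to the boundary point $$u$$, then retreat by the factor $$1/s$$:

```python
# Toy sketch of the extrapolation step: from an interior point p, move
# along direction d to the boundary point u of {z : A z <= b}, then
# retreat to p + (u - p)/s to shun the boundary (s in ]1, 2]).
def extrapolate(A, b, p, d, s=1.5):
    # largest t >= 0 with A(p + t d) <= b (ratio test over rows)
    ts = []
    for ai, bi in zip(A, b):
        ad = sum(x * y for x, y in zip(ai, d))
        if ad > 0:
            ap = sum(x * y for x, y in zip(ai, p))
            ts.append((bi - ap) / ad)
    t = min(ts)                       # finite for a bounded direction
    u = [pi + t * di for pi, di in zip(p, d)]
    return [pi + (ui - pi) / s for pi, ui in zip(p, u)]

A = [[1.0, 0.0], [0.0, 1.0]]          # the unit square z <= (1, 1)^T
b = [1.0, 1.0]
p_new = extrapolate(A, b, [0.0, 0.0], [1.0, 0.0], s=2.0)
```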

Here min $${h}_{m+n+1}$$ may be solved for $$p^* := p + tv$$ where $$t \in {}^{\omega}\mathbb{R}_{\ge 0}$$ and $${v}_{m+n+1} = 0$$. If min$${}_{k} {h}_{k} r = 0$$ follows, stop; otherwise, start over until min $$h = 0$$ or min $$h > 0$$ is certain. If necessary, relax the constraints temporarily by the same small modulus. Since almost every iteration step in $$\mathcal{O}({\omega\vartheta}^{2})$$ at least halves $$h$$, the strong duality theorem (, p. 60 - 65) yields the result.$$\square$$

Conclusion: If neither a primally feasible $$x$$ nor a dual solution $$y$$ needs to be computed, the runtime of the LP max $$\{{c}^{T}x : c \in {}^{\omega}\mathbb{R}^{n}, x \in {P}_{\ge 0}\}$$ can roughly be halved by setting $$h := {c}^{T}x.\square$$

Remarks: The simplex method and the face algorithm (, p. 580 f.) may solve the LP faster for small $$m$$ and $$n$$. The current stock of constraints or variables can easily be changed, because the intex method is a non-transforming method; it is faster than all known (worst-case) LP-solving algorithms in $$\mathcal{O}({{_e}\omega \omega}^{19/6})$$. Details will only be published once misuse for non-transparent and bad decisions need no longer be feared.

Corollary: Given a first solution $${x}^{o}$$, the LP max $$\{{||x - {x}^{o}||}_{1} : {c}^{T}x = {c}^{T}{x}^{o}, Ax \le b, x - {x}^{o} \in {[-1, 1]}^{n}, x \in {}^{\omega}\mathbb{R}_{\ge 0}^{n}\}$$ can determine a second solution in $$\mathcal{O}({\omega\vartheta}^{2})$$, if one exists; $${y}^{o}$$ may be treated analogously.$$\square$$

Corollary: The eigenvalues $$\lambda \in {}^{\omega}\mathbb{R}$$ of the matrix $$A \in {}^{\omega}\mathbb{R}^{n \times n}$$ and their eigenvectors $$x \ne 0$$ are w.l.o.g. just the solutions, determinable in $$\mathcal{O}({\omega\vartheta}^{2})$$, to the LPs max $$\{\lambda \in [0, h_0]: Ax = \pm\lambda x, x \in {[-1, 1]}^{n} \setminus \{0\}\}.\square$$

Corollary: The LP min $$\{h \in [0, s \, \text{max } \{|{b}_{1}|, ..., |{b}_{m}|\}] : \pm(Ax - b) \le (h, ..., h)^{T} \in {}^{\omega}\mathbb{R}_{\ge 0}^{m}\}$$ can determine an $$x \in {}^{\omega}\mathbb{R}^{n}$$ of every solvable linear system (LS) $$Ax = b$$ in $$\mathcal{O}({\vartheta}^{3})$$. The LPs max $$\{{x}_{j} : Ax = 0\}$$ yield all solutions to the LS. The matrix $$A$$ is regular if and only if max $$\{{||x||}_{1} : Ax = 0\} = 0.\square$$
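The formulation can be illustrated directly: the optimal $$h$$ of this LP equals the maximum-norm residual of $$Ax - b$$, which vanishes exactly at a solution of the LS. The helper name and example data below are ours:

```python
# Sketch of the linear-system LP: min {h : +-(Ax - b) <= h*1}.
# The smallest feasible h for a given x is the max-norm residual;
# for a solvable system the optimum over x is h = 0.
def residual_height(A, b, x):
    """Smallest h with +-(Ax - b) <= h*1, i.e. the max-norm residual."""
    r = [sum(aij * xj for aij, xj in zip(ai, x)) - bi
         for ai, bi in zip(A, b)]
    return max(abs(ri) for ri in r)

A = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]
x = [1.0, 3.0]                 # solves Ax = b exactly
h = residual_height(A, b, x)   # h == 0.0 at the exact solution
```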

Corollary: Let $${\alpha }_{j} := {A}_{.j}^{-1}$$ for $$j = 1, ..., n$$ concerning the matrix $${A}^{-1} \in {}^{\omega}\mathbb{R}^{n \times n}$$ and let $${\delta}_{ij}$$ be the Kronecker delta. Regular matrices $$A$$ have eigenproduct $$\ne 0$$ and allow every LS $${A \alpha }_{j} = {({\delta}_{1j}, ..., {\delta}_{nj})}^{T}$$ to be solved in $$\mathcal{O}({\vartheta}^{3}).\square$$
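A column-by-column sketch with the stated Kronecker-delta right-hand sides, using plain Gaussian elimination with partial pivoting rather than the intex method:

```python
# Invert a regular A column by column by solving A*alpha_j = e_j
# (Kronecker-delta right-hand sides) via Gaussian elimination.
def solve(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for k in range(n):
        r = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivot
        M[k], M[r] = M[r], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def inverse(A):
    n = len(A)
    cols = [solve(A, [1.0 if i == j else 0.0 for i in range(n)])
            for j in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]

Ainv = inverse([[4.0, 7.0], [2.0, 6.0]])   # eigenproduct (det) = 10 != 0
```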

Corollary: If $$q \in [0, 1]$$ is the density of $$A$$ and only finite real numbers are used, then $$\mathcal{O}({\vartheta}^{3})$$ and $$\mathcal{O}({\omega\vartheta}^{2})$$ above can everywhere be replaced by max $$\{\mathcal{O}(qmn), \mathcal{O}(m + n)\}$$ and max $$\{\mathcal{O}(q{n}^{2}), \mathcal{O}(n)\}$$, respectively.$$\square$$

Remarks: These five corollaries can easily be transferred to the complex case. By resorting to the initial data, the intex method is numerically very stable. Rounding errors can be kept small by using a modified Kahan-Babuška-Neumaier summation. If it is optimised for distributed computing in $${}^{\nu}\mathbb{R}^{\nu}$$, its runtime only amounts to $$\mathcal{O}(1)$$. It is also well-suited for (mixed) integer problems and (non-) convex (Pareto) optimisation (according to nature as in ).
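The mentioned modified Kahan-Babuška-Neumaier summation can be sketched as follows; the classic cancellation example shows the effect:

```python
# Kahan-Babuska-Neumaier compensated summation: carries a correction
# term c holding the low-order bits lost in each addition.
def kbn_sum(values):
    s, c = 0.0, 0.0
    for v in values:
        t = s + v
        if abs(s) >= abs(v):
            c += (s - t) + v      # low-order bits lost from v
        else:
            c += (v - t) + s      # low-order bits lost from s
        s = t
    return s + c

# Two 1.0 terms are swallowed by naive summation but recovered here:
data = [1.0, 1e100, 1.0, -1e100]
assert kbn_sum(data) == 2.0
assert sum(data) != 2.0           # plain left-to-right summation fails
```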

Corollary: Every solvable convex programme min $$\{{f}_{1}(x) : x \in {}^{\omega}\mathbb{R}^{n}, {({f}_{2}(x), ..., {f}_{m}(x))}^{T} \le 0\}$$, where the convex functions $${f}_{i}$$ take values in $${}^{\omega}\mathbb{R}$$ for $$i = 1, ..., m$$, may be solved by the intex method and two-dimensional bisection or Newton's methods in polynomial runtime, provided the number of operands $${x}_{j}$$ of the $${f}_{i}$$ is $$\le {\omega}^{\nu-3}$$ and there exists an $$x$$ with $${f}_{i}(x) < 0$$ for all $$i > 1$$ (cf. , p. 589 ff.).$$\square$$

code of simplex method
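The source does not include the announced code; a minimal, self-contained tableau sketch for max $$\{{c}^{T}x : Ax \le b, x \ge 0\}$$ with $$b \ge 0$$ might look as follows. It uses Dantzig's entering rule rather than the steepest-edge rule described above:

```python
# Minimal tableau simplex for max {c^T x : Ax <= b, x >= 0}, b >= 0.
# Slack variables form the identity start basis; Dantzig's rule picks
# the entering column, the minimum-ratio test the leaving row.
def simplex(c, A, b):
    m, n = len(A), len(c)
    # tableau rows [A | I | b], objective row [-c | 0 | 0]
    t = [A[i] + [1.0 if k == i else 0.0 for k in range(m)] + [b[i]]
         for i in range(m)]
    t.append([-cj for cj in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))
    while True:
        s = min(range(n + m), key=lambda j: t[-1][j])
        if t[-1][s] >= -1e-12:
            break                                  # optimal
        rows = [i for i in range(m) if t[i][s] > 1e-12]
        if not rows:
            raise ValueError("unbounded LP")
        r = min(rows, key=lambda i: t[i][-1] / t[i][s])
        p = t[r][s]                                # rectangle-rule pivot
        t[r] = [v / p for v in t[r]]
        for i in range(m + 1):
            if i != r and t[i][s] != 0.0:
                f = t[i][s]
                t[i] = [t[i][j] - f * t[r][j] for j in range(n + m + 1)]
        basis[r] = s
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = t[i][-1]
    return x, t[-1][-1]                            # solution, optimum

x, opt = simplex([3.0, 2.0], [[1.0, 1.0], [1.0, 0.0]], [4.0, 2.0])
```

For Phase I (when $$b \ge 0$$ fails) and the anti-cycling perturbation described above, the auxiliary LP and the relaxation $${b}_{i} := ||{a}_{i}||$$ from the text would have to be added on top of this sketch.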