
#51: Improvement of Linear Programming on 11.02.2019

In the following section, Set Theory and Nonstandard Analysis are presupposed. The exponential simplex method and the polynomial intex method (inter-/extrapolation) solve linear programmes (LPs).

Diameter theorem for polytopes: The diameter of an \(n\)-dimensional polytope defined by \(m\) constraints for \(m, n \in {}^{\omega}\mathbb{N}_{\ge 2}\) is at most \(2(m + n - 3)\).

Proof: At most \(m - 1\) hyperplanes can be assembled into an incomplete cycle of dimension 2, and there are at most \(n - 2\) sideward alternatives in the remaining dimensions. Overcoming each minimal distance requires at most two edges, which yields the factor 2.\(\square\)

Remark: Dropping the requirement of finiteness, the theorem can be extended to polyhedra analogously.

Definition: Let \(\vartheta := {_e}\omega \omega\) and let \({|| \cdot ||}_{1}\) denote the 1-norm. Every variable \(x \in X \subseteq {}^{\omega}\mathbb{R}^{\omega}\) passed to the next step is followed by \({}^{*}\), where \(\Delta x := {x}^{*} - x\). The unit vector of \(x\) is \({_1}{x} := x/||x||\), where \({_1}{0}\) is undefined. Let \(x, y \in X \) in \((x, y, ...)^T\) be row vectors. If a method requires computation time in seconds and memory in bits of \(\mathcal{O}({\omega}^{\mathcal{O}(1)})\), it is polynomial. If either of these quantities is \(\mathcal{O}({e}^{|\mathcal{O}(\omega)|})\), the method is exponential. Let the eigenproduct (hitherto: determinant) of a square matrix be the product of its eigenvalues.\(\triangle\)

Theorem: The simplex method is exponential.

Proof and algorithm: Let \(P := \{x \in {}^{\omega}\mathbb{R}^{n} : Ax \le b, b \in {}^{\omega}\mathbb{R}^{m}, A \in {}^{\omega}\mathbb{R}^{m \times n}, m, n \in {}^{\omega}\mathbb{N}^{*}\}\) be the feasible domain of the LP max \(\{{d}^{T}x : d \in {}^{\omega}\mathbb{R}^{n}, x \in P\}\). The dual of the latter achieves \({x}^{*} \in {}^{\omega}\mathbb{R}_{\ge 0}^{m}\). Setting \(x := {x}^{+} - {x}^{-}\) with \({x}^{+}, {x}^{-} \ge 0\) attains \({x}^{*} \in {}^{\omega}\mathbb{R}_{\ge 0}^{2n}\). Solving max \(\{-z : Ax - b \le {(z, ..., z)}^{T} \in {}^{\omega}\mathbb{R}_{\ge 0}^{m}\}\) yields a feasible \(x\) when \(b \ge 0\) does not hold. Let \(|\text{min } \{{b}_{1}, ..., {b}_{m}\}|\) be the initial value of \(z\), 0 its target value, and let the starting point be \(x := 0\). A suitable pivot step ensures that \({b}^{*} \ge 0\).
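
As an illustration of this phase-1 construction, the following minimal sketch sets up max \(\{-z : Ax - b \le (z, ..., z)^{T}\}\) for hypothetical example data and hands it to a standard solver; scipy's linprog, the data and the variable layout are assumptions for illustration only, not part of the method above.

```python
import numpy as np
from scipy.optimize import linprog  # off-the-shelf solver, used only for illustration

# Hypothetical data: b >= 0 fails, so a feasible x has to be found first.
A = np.array([[1.0, 1.0], [-1.0, 2.0]])
b = np.array([-1.0, 4.0])
m, n = A.shape

# Variables (x, z); maximise -z, i.e. minimise z, subject to A x - z*1 <= b.
c = np.concatenate([np.zeros(n), [1.0]])
A_ub = np.hstack([A, -np.ones((m, 1))])
res = linprog(c, A_ub=A_ub, b_ub=b,
              bounds=[(None, None)] * n + [(0, None)])  # x free, z >= 0

# The start x = 0, z = |min b_i| is feasible; z* = 0 at the optimum means
# res.x[:n] is feasible for the original constraints A x <= b.
print(res.x[:n], res.x[n])
```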

Let \(i, j, k \in {}^{\omega}\mathbb{N}^{*}\) and let \({a}_{i}^{T}\) be the \(i\)-th row vector of \(A\). If \({d}_{j} \le 0\) for all \(j\), the LP is solved. If for some \({d}_{j} > 0\) also \({a}_{ij} \le 0\) for all \(i\), the LP is positively unbounded. If for some \({d}_{j} = 0\) also \({a}_{ij} \le 0\) for all \(i\), drop \({d}_{j}\) and \({A}_{.j}\) as well as \({b}_{i}\) and \({a}_{i}\), but only when \({a}_{ij} < 0\) holds. Likewise, the inequality \({a}_{ij}{x}_{j} \ge 0 > {b}_{i}\) for all \(j\) has no solution. If necessary, divide all \({a}_{i}^{T}x \le {b}_{i}\) by \(||{a}_{i}||\) and all \({d}_{j}\) and \({a}_{ij}\) by the minimum of \(|{a}_{ij}|\) such that \({a}_{ij} \ne 0\) for each \(j\). This will be reversed later.

If necessary, renormalise by \(||{a}_{i}||\). Redundant constraints (with \({a}_{i} \le 0\)) may always be removed. For each \({d}_{j} > 0\) and non-basic variable \({x}_{j}\), select the minimum ratio \({b}_{k}/{a}_{kj}\) over all \({a}_{ij} > 0\). The next potential vertex is given by \({x}_{j}^{*} = {x}_{j} + {b}_{k}/{a}_{kj}\) for feasible \({x}^{*}\). To select the steepest edge, pick the pivot \({a}_{kj}\) corresponding to the \({x}_{j}\) that maximises \({d}^{T}{_1}{\Delta x}\) or \({d}_{j}^{2}/(1 + {||{A}_{.j}||}^{2})\) in the \(k\)-th constraint.
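
A minimal sketch of this ratio test and steepest-edge choice might look as follows; the dense numpy tableau, the tolerance and the function name are assumptions for illustration.

```python
import numpy as np

def choose_pivot(A, b, d, tol=1e-12):
    """For each improving column j (d_j > 0), take the minimum ratio b_k / a_kj
    over rows with a_kj > 0, then keep the column with the largest
    steepest-edge score d_j^2 / (1 + ||A_.j||^2)."""
    best_score, pivot = -np.inf, None
    for j in np.where(d > tol)[0]:
        rows = np.where(A[:, j] > tol)[0]
        if rows.size == 0:
            continue                                  # this column is unbounded above
        k = rows[np.argmin(b[rows] / A[rows, j])]     # minimum-ratio row
        score = d[j] ** 2 / (1.0 + A[:, j] @ A[:, j])
        if score > best_score:
            best_score, pivot = score, (k, j)
    return pivot                                      # None means no improving column
```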

If there are multiple maxima, select max\({}_{k,j} {d}_{j}{b}_{k}/{a}_{kj}\) according to the rule of the best pivot value or, perhaps less successfully, the smallest angle min \({{_{(1)}}(1, ..., 1)}^{T}{_1}{d}^{*}\). If the objective function cannot be maximised directly, perturb, i.e. relax, the constraints with \({b}_{i} = 0\) by the same minimal modulus. These need not be written into the tableau: simply set \({b}_{i} = ||{a}_{i}||\).

If another multiple vertex is encountered, unlikely as this is, simply increase the earlier \({b}_{i}\) by \(||{a}_{i}||\). Leaving a multiple vertex, after which the relaxation is reverted, may require solving an LP with \(d > 0\) and \(b = 0\). Otherwise, the objective function increases strictly monotonically along the chosen path. Finally, \({d}_{j}^{*}, {a}_{ij}^{*}\) and \({b}_{i}^{*}\) can simply be computed using the rectangle rule (cf. [775], p. 63).
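
As a reminder of how the rectangle rule acts on a dense tableau, here is a minimal sketch; the function name and the full-tableau layout are assumptions, and [775] is not reproduced here.

```python
import numpy as np

def rectangle_rule(A, b, d, k, j):
    """One update around the pivot a_kj: every entry loses its own
    pivot-column entry times the pivot row, divided by the pivot element."""
    piv = A[k, j]
    piv_row, col = A[k, :] / piv, A[:, j].copy()
    A_new = A - np.outer(col, piv_row)
    A_new[k, :] = piv_row                # pivot row is only rescaled
    b_new = b - col * (b[k] / piv)
    b_new[k] = b[k] / piv
    d_new = d - d[j] * piv_row           # reduced costs updated the same way
    return A_new, b_new, d_new
```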

Despite the diameter theorem for polytopes, the simplex method is not polynomial under any given set of pivoting rules in the worst case, since an exponential "drift" can be constructed (e.g. for Klee-Minty or Jeroslow polytopes), creating a large deviation from the shortest path by forcing the selection of an unfavourable edge at every step. The result follows in accordance with the state of research.\(\square\)

Theorem: The intex method solves every solvable LP in \(\mathcal{O}({\vartheta}^{3})\).

Proof and algorithm: First, \({b}^{T}y - {d}^{T}x \le 0, Ax \le b\) and \({A}^{T}y \ge d\) are normalised and scaled. Let the height \(h\) have the initial value \({h}_{0} := |\text{min } \{{b}_{1}, ..., {b}_{m}, {-d}_{1}, ..., {-d}_{n}\}|/r\) for the reduction factor \(r \in \; [½, 1[\). Let the LP min \(\{h \in [0, {h}_{0}] : x \in {}^{\omega}\mathbb{R}_{\ge 0}^{n}, y \in {}^{\omega}\mathbb{R}_{\ge 0}^{m}, {b}^{T}y - {d}^{T}x \le h, Ax - b \le (h, ..., h)^{T} \in {}^{\omega}\mathbb{R}_{\ge 0}^{m}, d - {A}^{T}y \le (h, ..., h)^{T} \in {}^{\omega}\mathbb{R}_{\ge 0}^{n}\}\) have the feasible interior starting point \(v := (x, y, h)^{T} \in {}^{\omega}\mathbb{R}_{\ge 0}^{m+n+1}\), e.g. \((0, 0, h_0)^{T}\). It identifies the mutually dual LPs max \(\{{d}^{T}x : d \in {}^{\omega}\mathbb{R}^{n}, x \in {P}_{\ge 0}\}\) and min \(\{{b}^{T}y : y \in {}^{\omega}\mathbb{R}_{\ge 0}^{m}, {A}^{T}y \ge d\}\).
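
The combined LP above can be written down mechanically. The following sketch builds it from \(A\), \(b\), \(d\) and solves it with an off-the-shelf solver merely to check the formulation; scipy's linprog, \(r = ½\) and the example data are assumptions for illustration and do not represent the intex method itself.

```python
import numpy as np
from scipy.optimize import linprog  # standard solver, only to check the formulation

def combined_lp(A, b, d, r=0.5):
    """min{h : b^T y - d^T x <= h, A x - h*1 <= b, d - A^T y <= h*1,
    x >= 0, y >= 0, 0 <= h <= h0}; at h = 0 the x- and y-parts solve
    the mutually dual LPs."""
    m, n = A.shape
    h0 = abs(min(np.min(b), np.min(-d))) / r
    c = np.concatenate([np.zeros(n + m), [1.0]])               # variables (x, y, h), minimise h
    A_ub = np.vstack([
        np.concatenate([-d, b, [-1.0]]),                       # b^T y - d^T x - h <= 0
        np.hstack([A, np.zeros((m, m)), -np.ones((m, 1))]),    # A x - h*1 <= b
        np.hstack([np.zeros((n, n)), -A.T, -np.ones((n, 1))]), # -A^T y - h*1 <= -d
    ])
    b_ub = np.concatenate([[0.0], b, -d])
    bounds = [(0, None)] * (n + m) + [(0, h0)]
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

res = combined_lp(np.array([[1.0, 2.0], [3.0, 1.0]]), np.array([4.0, 5.0]), np.array([1.0, 1.0]))
# res.x[:2] is the primal and res.x[2:4] the dual solution when res.x[-1] is (close to) 0.
```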

Then the centre of gravity of the subpolytope below \(h\) is approximated by determining the interpolated coordinates for \(p_i^2 := \hat{4}{\sum_{j} a_{ij}^2}/(\sum_{j} a_{ij})^2\) and \(q_j^2 := \hat{4}{\sum_{i} a_{ij}^2}/(\sum_{i} a_{ij})^2\) such that \(a_{ij} > 0\) as \(x_j^* := p_i \max x_j - \acute{p_i} \min x_j\) and \(y_i^* := q_j \max y_i - \acute{q_j} \min y_i\). Here \(x\) takes precedence over \(y\) and the restriction \({b}^{T}y - {d}^{T}x \le h\) is included analogously, until all \(|\Delta{v}_{k}|\) are sufficiently small.

After that, extrapolate \(v\) via \({v}^{*}\) into the boundary of the subpolytope. The \(r\)-fold of the distance exceeding \({v}^{*}\) determines the new starting point \(v\). If \(t :=\) min\({}_{k} \Delta{h}_{k}\) yields min\({}_{k} {h}_{k} t = 0\), stop; otherwise, start over until min \(h = 0\) or min \(h > 0\) is certain. Since \(h\) is at least halved in each iteration step in \(\mathcal{O}({\omega\vartheta}^{2})\), the strong duality theorem ([775], p. 60 - 65) yields the result.\(\square\)

Conclusion: If neither a primally feasible \(x\) nor a dual solution \(y\) needs to be computed, the runtime of the LP max \(\{{d}^{T}x : d \in {}^{\omega}\mathbb{R}^{n}, x \in {P}_{\ge 0}\}\) can roughly be halved by setting \(h := {d}^{T}x.\square\)

Remarks: The simplex method and the face algorithm ([916], p. 580 f.) may solve the LP faster for small \(m\) and \(n\). The current stock of constraints or variables can easily be changed, because the intex method is a non-transforming method; it is faster than all known (worst-case) LP-solving algorithms in \(\mathcal{O}({{_e}\omega \omega}^{19/6})\). Details will only be published once no misuse for non-transparent and bad decisions is to be feared.

Corollary: The LP max \(\{{||x - {x}^{o}||}_{1} : {d}^{T}x = {d}^{T}{x}^{o}, Ax \le b, x - {x}^{o} \in {[-1, 1]}^{n}, x \in {}^{\omega}\mathbb{R}_{\ge 0}^{n}\}\) can determine, given a first solution \({x}^{o}\), a second one (if any) in \(\mathcal{O}({\omega\vartheta}^{2})\), where \({y}^{o}\) may be treated analogously.\(\square\)

Corollary: The eigenvalues \(\lambda \in {}^{\omega}\mathbb{R}\) of the matrix \(A \in {}^{\omega}\mathbb{R}^{n \times n}\) and the eigenvectors \(x \ne 0\) are w.l.o.g. just the solutions, determinable in \(\mathcal{O}({\omega\vartheta}^{2})\), to the LPs max \(\{\lambda + {||x||}_{1} : Ax = \lambda x, x \in {[-1, 1]}^{n}, \lambda \ge 0\}.\square\)

Corollary: The LP min \(\{h \in [0, 1 + \text{max } \{|{b}_{1}|, ..., |{b}_{m}|\}] : \pm(Ax - b) \le (h, ..., h)^{T} \in {}^{\omega}\mathbb{R}_{\ge 0}^{m}\}\) can determine an \(x \in {}^{\omega}\mathbb{R}^{n}\) for every solvable linear system (LS) \(Ax = b\) in \(\mathcal{O}({\vartheta}^{3})\). The LPs max \(\{{x}_{j} : Ax = 0\}\) yield all solutions to the LS. The matrix \(A\) is regular if and only if the LP max \(\{{||x||}_{1} : Ax = 0\} = 0.\square\)
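
A minimal sketch of this formulation follows, with hypothetical data and a standard solver (scipy's linprog) standing in for the intex method.

```python
import numpy as np
from scipy.optimize import linprog  # standard LP solver, used for illustration

def solve_ls_via_lp(A, b):
    """min{h : A x - b <= h*1, -(A x - b) <= h*1, h >= 0}.
    If the linear system A x = b is solvable, the optimum is h = 0 and the
    returned x solves it."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), [1.0]])        # minimise h
    A_ub = np.vstack([np.hstack([A, -np.ones((m, 1))]),
                      np.hstack([-A, -np.ones((m, 1))])])
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.x[n]                      # candidate solution and residual bound h

x, h = solve_ls_via_lp(np.array([[2.0, 1.0], [1.0, 3.0]]), np.array([3.0, 4.0]))
print(x, h)                                         # (1, 1) and h = 0 for this example
```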

Corollary: Let \({\alpha }_{j} := {A}_{.j}^{-1}\) for \(j = 1, ..., n\) be the columns of the matrix \({A}^{-1} \in {}^{\omega}\mathbb{R}^{n \times n}\) and let \({\delta}_{ij}\) be the Kronecker delta. Regular \(A\) have eigenproduct \(\ne 0\) and allow every LS \({A \alpha }_{j} = {({\delta}_{1j}, ..., {\delta}_{nj})}^{T}\) to be solved in \(\mathcal{O}({\vartheta}^{3}).\square\)
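
For illustration, the column-by-column computation of \({A}^{-1}\) might be sketched as follows; the hypothetical \(2 \times 2\) matrix and numpy's direct solver stand in for the LP-based LS solution of the previous corollary.

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])                 # hypothetical regular matrix
n = A.shape[0]
assert abs(np.prod(np.linalg.eigvals(A))) > 1e-12      # eigenproduct (product of eigenvalues) nonzero
# Solve A alpha_j = (delta_1j, ..., delta_nj)^T for each column j.
A_inv = np.column_stack([np.linalg.solve(A, np.eye(n)[:, j]) for j in range(n)])
assert np.allclose(A @ A_inv, np.eye(n))
```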

Corollary: If \(s \in [0, 1]\) is the density of \(A\) and only finite real numbers are used, then \(\mathcal{O}({\vartheta}^{3})\) and \(\mathcal{O}({\omega\vartheta}^{2})\) above can everywhere be replaced by max \(\{\mathcal{O}(smn), \mathcal{O}(m + n)\}\) and max \(\{\mathcal{O}(s{n}^{2}), \mathcal{O}(n)\}\), respectively.\(\square\)

Remarks: These five corollaries can easily be transferred to the complex case. The intex method is numerically very stable, since rounding errors can be kept small, especially near the end, by resorting to the initial data and to a Kahan-Babuška-Neumaier summation modified according to Klein. If it is optimised for distributed computing in \({}^{c}\mathbb{R}^{c}\), its runtime only amounts to \(\mathcal{O}(1)\). It is also well-suited for (mixed) integer problems and (non-)convex (Pareto) optimisation.
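
The summation just mentioned can be sketched as follows; this is the second-order (Klein) variant of Kahan-Babuška-Neumaier compensated summation, written out here purely for illustration.

```python
def klein_sum(values):
    """Second-order Kahan-Babuška summation (Klein's variant): keeps two
    levels of running compensation for the rounding error of each addition."""
    s = cs = ccs = 0.0
    for x in values:
        t = s + x
        c = (s - t) + x if abs(s) >= abs(x) else (x - t) + s      # error of s + x
        s = t
        t = cs + c
        cc = (cs - t) + c if abs(cs) >= abs(c) else (c - t) + cs  # error of cs + c
        cs = t
        ccs += cc
    return s + cs + ccs

# Example: the plain sum loses the small terms, the compensated sum keeps them.
data = [1.0, 1e100, 1.0, -1e100] * 10000
print(sum(data), klein_sum(data))   # 0.0 vs 20000.0
```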

Corollary: Every solvable convex programme min \(\{{f}_{1}(x) : x \in {}^{\omega}\mathbb{R}^{n}, {({f}_{2}(x), ..., {f}_{m}(x))}^{T} \le 0\}\) where the \({f}_{i} \in {}^{\omega}\mathbb{R}\) are convex functions for \(i = 1, ..., m\) may be solved by the intex method and two-dimensional bisection or Newton's methods in polynomial runtime, if the number of operands \({x}_{j}\) of the \({f}_{i}\) is \(\le {\omega}^{c-3}\) and if an \(x\) exists so that \({f}_{i}(x) < 0\) for all \(i > 1\) (cf. [939], p. 589 ff.).\(\square\)

code of simplex method

© 11.02.2019 by Boris Haase

