Homepage of Boris Haase







Linear Programming


In the following section, we presuppose Set Theory and Nonstandard Analysis. We solve linear programmes (LPs) with the exponential simplex method and the polynomial intex method (interpolation/extrapolation).

Diameter theorem for polytopes: The diameter of an \(n\)-dimensional polytope defined by \(m\) constraints with \(m, n \in {}^{\omega}\mathbb{N}_{\ge2}\) is at most \(2(m + n - 3)\).

Proof: We can assemble at most \(\acute{m}\) hyperplanes into an incomplete cycle (of dimension 2) and have to consider \(n - 2\) alternatives sideways (in the remaining dimensions). Since each section can be traversed with at most two edges, the factor is 2. The theorem extends analogously to polyhedra by dropping the requirement of finiteness.\(\square\)

Definition: Let \(\omega\) be the limit to which all variables in Landau notation tend, and \(\vartheta := \ell \omega\). A method is polynomial (exponential) if its computation time in seconds and (or) its memory consumption in bits is \(\mathcal{O}({\omega}^{\mathcal{O}(1)})\) \((\mathcal{O}({e}^{|\mathcal{O}(\omega)|}))\).

Theorem: The simplex method is exponential.

Proof and algorithm: Let \(P := \{x \in {}^{\omega}\mathbb{R}^{n} : Ax \le b, b \in {}^{\omega}\mathbb{R}^{m}, A \in {}^{\omega}\mathbb{R}^{m \times n}, m, n \in {}^{\omega}\mathbb{N}^{*}\}\) be the feasible domain of the LP max \(\{{d}^{T}x : d \in {}^{\omega}\mathbb{R}^{n}, x \in P\}\). By taking the dual or setting \(x := {x}^{+} - {x}^{-}\) with \({x}^{+}, {x}^{-} \ge 0\), we may assume \(x \ge 0\). If \(b \ge 0\) does not hold, we first solve max \(\{-z : Ax - b \le {(z, ..., z)}^{T} \in {}^{\omega}\mathbb{R}^{m}, z \ge 0\}\) to obtain a feasible \(x\). The initial and the target value are \(z := |\text{min } \{{b}_{1}, ..., {b}_{m}\}|\) and \(z = 0\), respectively. We begin with \(x := 0\) as in the first case. Pivoting if necessary, we may assume \(b \ge 0\).

Let \(i, j, k \in {}^{\omega}\mathbb{N}^{*}\) and let \({a}_{i}^{T}\) be the \(i\)-th row vector of \(A\). If \({d}_{j} \le 0\) for all \(j\), the LP is solved. If for some \({d}_{j} > 0 \; ({d}_{j} = 0)\) we have \({a}_{ij} \le 0\) for all \(i\), the LP is positively unbounded (for now, we may drop \({d}_{j}\) and \({A}_{.j}\) as well as \({b}_{i}\) and \({a}_{i}\), but only when \({a}_{ij} < 0\) holds). The inequality \({a}_{ij}{x}_{j} \ge 0 > {b}_{i}\) for all \(j\) has no solution either. If necessary, divide all \({a}_{i}^{T}x \le {b}_{i}\) by \(||{a}_{i}||\) and all \({d}_{j}\) and \({a}_{ij}\) by the minimum of \(|{a}_{ij}|\) such that \({a}_{ij} \ne 0\) for each \(j\). This will be reversed later. If necessary, renormalise by \(||{a}_{i}||\).

We may always remove redundant constraints (those with \({a}_{i} \le 0\)). For each \({d}_{j} > 0\) and non-basic variable \({x}_{j}\), select the minimum ratio \({b}_{k}/{a}_{kj}\) over all \(i\) with \({a}_{ij} > 0\). Variables marked with \({}^{*}\) refer to the next step. The next potential vertex is given by \({x}_{j}^{*} = {x}_{j} + {b}_{k}/{a}_{kj}\) for feasible \({x}^{*}\). To follow the steepest edge, select the pivot \({a}_{kj}\) corresponding to the \({x}_{j}\) that maximises \({d}^{T}\Delta x/||\Delta x||\) or \({d}_{j}^{2}/(1 + ||{A}_{.j}{||}^{2})\) for \(\Delta x := {x}^{*} - x\) in the \(k\)-th constraint.
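The pivot selection just described can be sketched in Python. This is an illustrative fragment for a dense constraint matrix, not the author's implementation; it scores each improving column by \({d}_{j}^{2}/(1 + ||{A}_{.j}{||}^{2})\) and picks the pivot row by the minimum-ratio test:

```python
def steepest_edge_pivot(A, b, d):
    """Return (k, j) = (pivot row, pivot column), or None if no
    improving, bounded column exists. A is a list of rows, b the
    right-hand side, d the objective coefficients."""
    m, n = len(A), len(d)
    best, pivot = None, None
    for j in range(n):
        if d[j] <= 0:
            continue  # non-improving column
        # steepest-edge score d_j^2 / (1 + ||A_.j||^2)
        col_norm_sq = sum(A[i][j] ** 2 for i in range(m))
        score = d[j] ** 2 / (1.0 + col_norm_sq)
        # minimum-ratio test over rows with a_ij > 0
        ratios = [(b[i] / A[i][j], i) for i in range(m) if A[i][j] > 0]
        if not ratios:
            continue  # column unbounded in this direction
        if best is None or score > best:
            best, pivot = score, (min(ratios)[1], j)
    return pivot
```

For \(A = I\), \(b = (4, 6)^{T}\), \(d = (3, 1)^{T}\) this selects column 1 with its binding first constraint, i.e. the pair \((0, 0)\) in zero-based indexing.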

If there are multiple maxima, select max\({}_{k,j} \; {d}_{j}{b}_{k}/{a}_{kj}\) according to the rule of the best pivot value, or alternatively (less effectively) the smallest angle min \({(1, ..., 1)}^{T}d^{*}/||(\sqrt{n}) d^{*}||\). If we cannot directly maximise the objective function, we relax (perturb) the constraints with \({b}_{i} = 0\) by the same minimal modulus. These need not be written into the tableau: we simply set \({b}_{i} = ||{a}_{i}||\).

If another multiple vertex is encountered, unlikely as this is, we simply increase the earlier \({b}_{i}\) by \(||{a}_{i}||\). The cost of eliminating a multiple vertex, after which we revert the relaxation, corresponds to an LP with \(d > 0\) and \(b = 0\). Along the chosen path, the objective function increases (effectively) strictly monotonically. We can then compute \({d}_{j}^{*}, {a}_{ij}^{*}\) and \({b}_{i}^{*}\) simply by the rectangle rule (cf. [775], p. 63).
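The rectangle rule is the standard tableau pivot: every entry outside the pivot row and column is updated by subtracting the product of its row and column partners divided by the pivot element. A minimal sketch (the function name is illustrative):

```python
def rectangle_rule_pivot(T, k, j):
    """Pivot the tableau T (list of row lists) on entry T[k][j]:
    the pivot row is divided by the pivot element, and every other
    row i is updated by the rectangle rule
    T*[i][l] = T[i][l] - T[i][j] * T[k][l] / T[k][j]."""
    p = T[k][j]
    T[k] = [t / p for t in T[k]]          # normalise the pivot row
    for i in range(len(T)):
        if i != k and T[i][j] != 0:
            f = T[i][j]                   # multiplier for row i
            T[i] = [a - f * r for a, r in zip(T[i], T[k])]
    return T
```

After the pivot, the pivot column becomes a unit vector, which is exactly the basis exchange performed in each simplex step.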

In the worst case, the simplex method is not polynomial under any given set of pivoting rules, despite the diameter theorem for polytopes: an exponential "drift" can be constructed with Klee-Minty or Jeroslow polytopes, among others, which forces the selection of the least favourable edge and creates a large deviation from the shortest path. This is consistent with existing proofs. The result follows.\(\square\)

Theorem: The intex method solves every solvable LP in \(\mathcal{O}({\vartheta}^{3})\).

Proof and algorithm: First, we normalise and scale \({b}^{T}y - {d}^{T}x \le 0, Ax \le b\) and \({A}^{T}y \ge d\). Let the height \(h\) and \(v := {({x}^{T}, {y}^{T})}^{T} \in {}^{\omega}\mathbb{R}^{m+n}\) have the initial values \({h}_{0} := |\text{min } \{{b}_{1}, ..., {b}_{m}, {-d}_{1}, ..., {-d}_{n}\}|\) and 0, respectively. We compute the LP min \(\{h \in [0, {h}_{0}] : 0 \le x \in {}^{\omega}\mathbb{R}^{n}, 0 \le y \in {}^{\omega}\mathbb{R}^{m}, {b}^{T}y - {d}^{T}x \le h, Ax - b \le (h, ..., h)^{T} \in {}^{\omega}\mathbb{R}^{m}, d - {A}^{T}y \le (h, ..., h)^{T} \in {}^{\omega}\mathbb{R}^{n}\}\) via the (dual) programme min \(\{{b}^{T}y : 0 \le y \in {}^{\omega}\mathbb{R}^{m}, {A}^{T}y \ge d\}\) for the (primal) programme max \(\{{d}^{T}x : d \in {}^{\omega}\mathbb{R}^{n}, x \in {P}_{\ge 0}\}\).

We successively interpolate all \({v}_{k}^{*} := (\text{max } {v}_{k} + \text{min } {v}_{k})/2\) (a few times) and, where appropriate, extra- or interpolate \({v}^{*}\) from \(v\). We determine the direction \(u \in {}^{\omega}\mathbb{R}^{m+n}\) after solving all LPs min \({h}_{k}\) in \({h}_{k}, {v}_{k} \in {}^{\omega}\mathbb{R}_{\ge 0}\) such that \({u}_{k} := \Delta {v}_{k} \Delta {h}_{k}/\text{min} \; \Delta {h}_{k}\). We solve min \(h\) in \(h, s \in {}^{\omega}\mathbb{R}_{\ge 0}\) for \({v}^{*} := v + su\) and fix \(v\). The bisection method solves the two-dimensional LPs in \(\mathcal{O}({\vartheta}^{2})\). After interpolating anew, we extrapolate \({({v}^{T}, h)}^{T}\) through \({({v}^{*T}, {h}^{*})}^{T}\) into the boundary of the polytope in \(\mathcal{O}(\omega\vartheta)\).

Optionally, we may begin with \(v \ne 0\) or dispense with the two-dimensional LPs altogether. If \(x\) and \(y\) are optimal, or if \(h\) cannot be minimised any further, we stop the procedure. If we cannot leave \(v\), we relax all constraints except \(v \ge 0\) slightly and undo this at the end of the iteration step. Since each iteration step in \(\mathcal{O}(\omega{\vartheta}^{2})\) roughly halves \(h\), the claim follows by the strong duality theorem ([775], p. 60 - 65).\(\square\)
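The one-dimensional core of the bisection step above can be sketched generically. This is an illustrative Python fragment, assuming a feasibility predicate that is monotone in \(h\) (infeasible below the optimum, feasible above); the predicate `feasible` is a stand-in for the relaxed constraint test, not the author's routine:

```python
def bisect_min(feasible, lo, hi, tol=1e-9):
    """Find (to tolerance tol) the minimal h in [lo, hi] with
    feasible(h) True, halving the interval in each step just as
    each intex iteration roughly halves the height h."""
    assert feasible(hi), "upper bound must be feasible"
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if feasible(mid):
            hi = mid  # optimum lies at or below mid
        else:
            lo = mid  # optimum lies above mid
    return hi
```

For example, with the predicate \(h \mapsto (h \ge 1/2)\) on \([0, 1]\), the routine converges to \(1/2\).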

Remarks: The simplex method and the face algorithm ([916], p. 580 f.) may solve the LP faster for small \(m\) and \(n\). We can easily change the current stock of constraints or variables, since the intex method is non-transforming and faster than all known (worst-case) LP-solving algorithms, which require \(\mathcal{O}({\ell\omega}^{4.5})\). We simply have to adjust \(h\) if necessary, since additional variables can be initialised with 0.

Corollary: Every solvable linear system (LS) \(Ax = b\) for \(x \in {}^{\omega}\mathbb{R}^{n}\) can be solved as LP min \(\{h \in [0, \text{max } \{|{b}_{1}|, ..., |{b}_{m}|\}] : \pm(Ax - b) \le (h, ..., h)^{T} \in {}^{\omega}\mathbb{R}^{m}\}\) in \(\mathcal{O}({\vartheta}^{3}).\square\)
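The reformulation in this corollary only rearranges the data of the LS. A sketch that builds the LP data \((G, g, c)\) with constraints \(Gv \le g\) in the variables \(v = ({x}^{T}, h)^{T}\) and objective \(c\) (the function and variable names are illustrative; the bound \(h \in [0, \text{max}_i |{b}_{i}|]\) would be handled separately by the solver):

```python
def ls_as_lp(A, b):
    """Rewrite the linear system Ax = b as the LP
    min h s.t. +-(Ax - b) <= (h, ..., h)^T, i.e. in v = (x, h):
      [ A  -1]        [ b]
      [-A  -1] v  <=  [-b],   minimise h.
    Returns (G, g, c) with constraints G v <= g and objective c."""
    n = len(A[0])
    G = ([row + [-1.0] for row in A] +                 # Ax - b <= h
         [[-a for a in row] + [-1.0] for row in A])    # b - Ax <= h
    g = list(b) + [-x for x in b]
    c = [0.0] * n + [1.0]  # minimise the height h only
    return G, g, c
```

An exact solution of the LS corresponds to the optimum \(h = 0\); otherwise the optimal \(h\) is the best Chebyshev residual.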

Corollary: The solvable LP min \(\{h \in [0, 1] : \pm(Ax - \lambda x) \le (h, ..., h)^{T} \in {}^{\omega}\mathbb{R}^{n}\}\) determines an eigenvector \(x \in {}^{\omega}\mathbb{R}^{n} \setminus \{0\}\) of the matrix \(A \in {}^{\omega}\mathbb{R}^{n \times n}\) for the eigenvalue \(\lambda \in {}^{\omega}\mathbb{R}\) in \(\mathcal{O}({\vartheta}^{3}).\square\)

Corollary: Let \({\alpha }_{j}\) be the \(j\)-th column vector of the matrix \({A}^{-1} \in {}^{\omega}\mathbb{R}^{n \times n}\) and let \({\delta}_{ij}\) be the Kronecker delta. Every LS \({A \alpha }_{j} = {({\delta}_{1j}, ..., {\delta}_{nj})}^{T}\) determining the inverse \({A}^{-1}\) of the regular matrix \(A\) can be solved for \(j = 1, ..., n\) in \(\mathcal{O}({\vartheta}^{3})\). Whether \(A\) is regular can also be determined in \(\mathcal{O}({\vartheta}^{3}).\square\)
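Solving the \(n\) systems \(A{\alpha}_{j} = {({\delta}_{1j}, ..., {\delta}_{nj})}^{T}\) column by column works with any LS solver; in the following sketch a plain Gauss-Jordan elimination stands in for the intex-based solver of the preceding corollaries, and the singularity test doubles as the regularity check:

```python
def invert(A):
    """Invert a regular matrix by solving A alpha_j = e_j for all
    unit vectors e_j at once: Gauss-Jordan elimination with partial
    pivoting on the matrix augmented with the identity."""
    n = len(A)
    # augment with the identity (columns of Kronecker deltas)
    M = [list(map(float, A[i])) + [float(i == j) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < 1e-12:
            raise ValueError("matrix is singular")   # regularity check
        M[col], M[piv] = M[piv], M[col]              # partial pivoting
        p = M[col][col]
        M[col] = [x / p for x in M[col]]             # normalise pivot row
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]                    # right half is A^{-1}
```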

Corollary: If only finite floating-point numbers are used and \(q \in [0, 1]\) is the density of \(A\), then \(\mathcal{O}({\vartheta}^{3})\) above can everywhere be replaced by max \(\{\mathcal{O}(qmn), \mathcal{O}(m + n)\}\) or max \(\{\mathcal{O}(q{n}^{2}), \mathcal{O}(n)\}.\square\)

Remarks: The four corollaries transfer easily to the complex case. The intex method is numerically very stable, since the initial data are altered only slightly. However, its strongly shrinking feasible domain requires particular attention in numerical respects. Modified by distributed computing, it can solve problems in \({}^{c}\mathbb{R}^{c}\) in \(\mathcal{O}(1)\). It is also well suited to (mixed) integer problems and (non-)convex (Pareto) optimisation.

Corollary: Every solvable convex programme min \(\{{f}_{1}(x) : x \in {}^{\omega}\mathbb{R}^{n}, {({f}_{2}(x), ..., {f}_{m}(x))}^{T} \le 0\}\), where the \({f}_{i}: {}^{\omega}\mathbb{R}^{n} \to {}^{\omega}\mathbb{R}\) are convex functions for \(i = 1, ..., m\), may be solved by the intex method and two-dimensional bisection or Newton's method in polynomial runtime, if the number of operands \({x}_{j}\) of the \({f}_{i}\) is \(\le {\omega}^{c-3}\) and if an \(x\) exists such that \({f}_{i}(x) < 0\) for all \(i > 1\) (see [939], p. 589 ff.).\(\square\)


© 2008-2018 by Boris Haase

