Homepage of Boris Haase







Linear Programming

In the following section, we presuppose Set Theory and Nonstandard Analysis. We solve linear programmes (LPs) by the exponential simplex method and the polynomial intex method (inter-/extrapolation).

Diameter theorem for polytopes: The diameter of an \(n\)-dimensional polytope defined by \(m\) constraints with \(m, n \in {}^{\omega}\mathbb{N}_{\ge2}\) is at most \(2(m + n - 3)\).

Proof: We can assemble at most \(m - 1\) hyperplanes into an incomplete cycle (of dimension 2) and have to consider \(n - 2\) alternatives sideways (in the remaining dimensions). Since each section can be passed with at most two edges, the factor is 2. The theorem extends analogously to polyhedra by dropping the requirement of finiteness.\(\square\)

Definition: Let \(\omega\) be the limit to which all variables in Landau notation tend, and let \(\vartheta := \omega \ln \omega\). A method is polynomial if both its computation time in seconds and its memory consumption in bits are \(\mathcal{O}({\omega}^{\mathcal{O}(1)})\); it is exponential if either is \(\mathcal{O}({e}^{|\mathcal{O}(\omega)|})\).

Theorem: The simplex method is exponential.

Proof and algorithm: Let \(P := \{x \in {}^{\omega}\mathbb{R}^{n} : Ax \le b, b \in {}^{\omega}\mathbb{R}^{m}, A \in {}^{\omega}\mathbb{R}^{m \times n}, m, n \in {}^{\omega}\mathbb{N}^{*}\}\) be the feasible domain of the LP max \(\{{d}^{T}x : d \in {}^{\omega}\mathbb{R}^{n}, x \in P\}\). By taking the dual or setting \(x := {x}^{+} - {x}^{-}\) with \({x}^{+}, {x}^{-} \ge 0\), we obtain \(x \ge 0\). If \(b \ge 0\) does not hold, we first solve max \(\{-z : Ax - b \le {(z, ..., z)}^{T} \in {}^{\omega}\mathbb{R}^{m}, z \ge 0\}\) to obtain a feasible \(x\). The initial and target values are \(z := |\text{min} \{{b}_{1}, ..., {b}_{m}\}|\) and \(z = 0\). We begin with \(x := 0\) as in the first case. Pivoting if necessary, we may assume that \(b \ge 0\).
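The Phase-I construction above can be sketched as follows; this is an illustrative helper (not the author's implementation), with `phase1_setup` a hypothetical name and the constraint matrix stored as plain lists of rows:

```python
def phase1_setup(A, b):
    """Build the data of the Phase-I LP  max{-z : Ax - (z,...,z)^T <= b, z >= 0}.

    Returns the augmented matrix [A | -1] (one extra column for z) and the
    initial point (x, z) = (0, |min{b_1,...,b_m}|); the target value is z = 0.
    Sketch only: A is a list of m rows, b a list of m right-hand sides.
    """
    z0 = abs(min(b)) if min(b) < 0 else 0.0    # z = |min b_i|, already 0 if b >= 0
    A_aug = [row + [-1.0] for row in A]        # Ax - z*1 <= b  as  [A | -1](x,z) <= b
    v0 = [0.0] * len(A[0]) + [z0]              # start at x = 0, z = z0
    return A_aug, v0
```

With `A = [[1, 0], [0, 1]]` and `b = [-2, 3]`, the start point is `(0, 0, 2)`, which satisfies \(Ax - (z,z)^T \le b\) with equality in the binding first row.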

Let \(i, j, k \in {}^{\omega}\mathbb{N}^{*}\) and let \({a}_{i}^{T}\) be the \(i\)-th row vector of \(A\). If \({d}_{j} \le 0\) for all \(j\), the LP is solved. If, for some \({d}_{j} > 0 \; ({d}_{j} = 0)\), \({a}_{ij} \le 0\) holds for all \(i\), the LP is positively unbounded (for now, we may drop \({d}_{j}\) and \({A}_{.j}\) as well as \({b}_{i}\) and \({a}_{i}\), but only when \({a}_{ij} < 0\) holds). The inequality \({a}_{ij}{x}_{j} \ge 0 > {b}_{i}\) for all \(j\) likewise has no solution. If necessary, divide all \({a}_{i}^{T}x \le {b}_{i}\) by \(||{a}_{i}||\), and all \({d}_{j}\) and \({a}_{ij}\) by the minimum of \(|{a}_{ij}|\) such that \({a}_{ij} \ne 0\) for each \(j\); this will be reversed later. If necessary, renormalise by \(||{a}_{i}||\).

We may always remove redundant constraints (those with \({a}_{i} \le 0\)). For each \({d}_{j} > 0\) and non-base variable \({x}_{j}\), select the minimum ratio \({b}_{k}/{a}_{kj}\) over all \(k\) with \({a}_{kj} > 0\). Variables marked \({}^{*}\) are considered in the next step. The next potential vertex is given by \({x}_{j}^{*} = {x}_{j} + {b}_{k}/{a}_{kj}\) for feasible \({x}^{*}\). To select the steepest edge, choose the pivot \({a}_{kj}\) corresponding to \({x}_{j}\) that maximises \({d}^{T}\Delta x/||\Delta x||\) or \({d}_{j}^{2}/(1 + ||{A}_{.j}{||}^{2})\) for \(\Delta x := {x}^{*} - x\) in the \(k\)-th constraint.
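The minimum-ratio test combined with the steepest-edge score \(d_j^2/(1 + ||A_{.j}||^2)\) can be sketched as below. This is an illustrative reading of the rule, not the author's code; `choose_pivot` is a hypothetical name and ties are broken by the first maximum found:

```python
def choose_pivot(A, b, d):
    """Return a pivot position (k, j) for a tableau with rows A, rhs b >= 0
    and objective coefficients d, or None if some improving column is
    unbounded.  For each d_j > 0 the minimum ratio b_k/a_kj over a_kj > 0
    picks the row k; among columns, the steepest-edge score
    d_j^2 / (1 + ||A_.j||^2) picks j.  Sketch only."""
    m, n = len(A), len(A[0])
    best_score, pivot = None, None
    for j in range(n):
        if d[j] <= 0:
            continue                                  # only improving columns
        ratios = [(b[i] / A[i][j], i) for i in range(m) if A[i][j] > 0]
        if not ratios:
            return None                               # LP unbounded along x_j
        _, k = min(ratios)                            # minimum-ratio row
        col_norm2 = sum(A[i][j] ** 2 for i in range(m))
        score = d[j] ** 2 / (1 + col_norm2)           # steepest-edge criterion
        if best_score is None or score > best_score:
            best_score, pivot = score, (k, j)
    return pivot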

If there are multiple maxima, select \(\underset{k,j}{\mathop{\max}}\,{{d}_{j}}{{b}_{k}}/{{a}_{kj}}\) according to the best-pivot-value rule, or alternatively (less effectively) the smallest angle min \({(1, ..., 1)}^{T}d^{*}/||(\sqrt{n}) d^{*}||\). If we cannot directly maximise the objective function, we relax (perturb) the constraints with \({b}_{i} = 0\) by the same minimal modulus. These need not be written into the tableau: we simply set \({b}_{i} = ||{a}_{i}||\).

If another multiple vertex is encountered, unlikely as this is, we simply increase the earlier \({b}_{i}\) by \(||{a}_{i}||\). The cost of eliminating a multiple vertex, after which we revert the relaxation, corresponds to an LP with \(d > 0\) and \(b = 0\). Along the chosen path, the objective function increases (effectively) strictly monotonically. We can then simply compute \({d}_{j}^{*}, {a}_{ij}^{*}\) and \({b}_{i}^{*}\) using the rectangle rule (cf. [775], p. 63).
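The rectangle rule used to update \(d_j^*, a_{ij}^*\) and \(b_i^*\) is the classical pivot update on a dense tableau; a minimal sketch (with `pivot_update` a hypothetical name, the tableau stored as rows \([a_i \mid b_i]\) plus an objective row):

```python
def pivot_update(T, k, j):
    """Pivot a dense tableau T in place on element T[k][j] by the
    rectangle rule:  t*_rs = t_rs - t_rj * t_ks / t_kj  for r != k,
    after normalising the pivot row by 1/t_kj.  Returns T.  Sketch."""
    p = T[k][j]
    T[k] = [t / p for t in T[k]]              # pivot row: divide by pivot element
    for r in range(len(T)):
        if r != k:
            f = T[r][j]                       # factor eliminating column j in row r
            T[r] = [T[r][s] - f * T[k][s] for s in range(len(T[r]))]
    return T
```

Pivoting \(\begin{pmatrix}2&1&4\\1&3&5\end{pmatrix}\) on the top-left entry yields \(\begin{pmatrix}1&0.5&2\\0&2.5&3\end{pmatrix}\), as the rectangle rule prescribes.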

In the worst case, the simplex method is not polynomial under any given set of pivoting rules, despite the diameter theorem for polytopes: with Klee-Minty or Jeroslow polytopes, among others, an exponential "drift" can be constructed, i.e. a large deviation from the shortest path forced by the selection of the least favourable edge. This is consistent with existing proofs. The result follows.\(\square\)
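For concreteness, one standard Klee-Minty construction (Chvátal's variant; other scalings exist) can be generated as follows. The generator name is ours, not the author's:

```python
def klee_minty(n):
    """Return (A, b, d) of the Klee-Minty cube in Chvatal's form:
        max sum_j 2^(n-j) x_j  s.t.  sum_{j<i} 2^(i-j+1) x_j + x_i <= 5^i,  x >= 0.
    On this LP the simplex method with Dantzig's rule visits all 2^n vertices,
    which yields the exponential worst case mentioned above.  Sketch."""
    A = [[2 ** (i - j + 1) if j < i else (1 if j == i else 0)
          for j in range(1, n + 1)] for i in range(1, n + 1)]
    b = [5 ** i for i in range(1, n + 1)]
    d = [2 ** (n - j) for j in range(1, n + 1)]
    return A, b, d
```

For \(n = 3\) this gives the familiar system \(x_1 \le 5\), \(4x_1 + x_2 \le 25\), \(8x_1 + 4x_2 + x_3 \le 125\) with objective \(4x_1 + 2x_2 + x_3\).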

Theorem: The intex method solves every solvable LP in \(\mathcal{O}({\vartheta}^{3})\).

Proof and algorithm: We compute the LP max \(\{h \in {}^{\omega}\mathbb{R}_{\ge 0} : {b}^{T}y \le {d}^{T}x, Ax \le b, {A}^{T}y \ge d, 0 \le x \in {}^{\omega}\mathbb{R}^{n}, 0 \le y \in {}^{\omega}\mathbb{R}^{m}\}\) via the (dual) programme min \(\{{b}^{T}y : 0 \le y \in {}^{\omega}\mathbb{R}^{m}, {A}^{T}y \ge d\}\) for the (primal) programme max \(\{{d}^{T}x : d \in {}^{\omega}\mathbb{R}^{n}, x \in {P}_{\ge 0}\}\). Let \(v := {({x}^{T}, {y}^{T})}^{T} \in {}^{\omega}\mathbb{R}^{m+n}\) with initial value \(v := 0\). First, we normalise and scale \({b}^{T}y - {d}^{T}x \le 0\), \(Ax \le b\) and \({A}^{T}y \ge d\). Let the height \(h\) always be a \({-x}^{T}{a}_{i}\) or \({y}^{T}{A}_{.j}\) for a fixed \(i\) or \(j\) such that its related goal is \(|\text{min} \{{b}_{1}, ..., {b}_{m}, {-d}_{1}, ..., {-d}_{n}\}|\).

We successively interpolate all \({v}_{k}^{*} := (\text{max} \; {v}_{k} + \text{min} \; {v}_{k})/2\) (a few times) and possibly extra- or interpolate \({v}^{*}\). We determine the direction \(u \in {}^{\omega}\mathbb{R}^{m+n}\) after solving all LPs max \({h}_{k}\) in \({h}_{k}, {v}_{k} \in {}^{\omega}\mathbb{R}_{\ge 0}\) such that \({u}_{k} := \Delta {v}_{k} \Delta {h}_{k}/\text{max} \; \Delta {h}_{k}\). We solve max \(h\) in \(h, s \in {}^{\omega}\mathbb{R}_{\ge 0}\) for \({v}^{*} := v + su\) and fix \(v\). The bisection method solves the two-dimensional LPs in \(\mathcal{O}({\vartheta}^{2})\). After interpolating anew, we extrapolate \({({v}^{T}, h)}^{T}\) through \({({v}^{*T}, {h}^{*})}^{T}\) into the boundary of the polytope in \(\mathcal{O}(\omega\vartheta)\).
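The step \(v^* := v + su\) into the boundary of the polytope amounts to a ratio test along the direction \(u\); a generic sketch (our illustration of this one substep, not the intex method itself):

```python
def max_step(A, b, v, u):
    """Largest s >= 0 with A(v + s*u) <= b for a feasible v, i.e. the step
    carrying v along direction u into the boundary of {w : Aw <= b}.
    Returns None when no constraint tightens along u (unbounded).  Sketch."""
    s_max = None
    for a_i, b_i in zip(A, b):
        au = sum(p * q for p, q in zip(a_i, u))    # directional rate a_i^T u
        if au > 0:                                 # constraint i tightens along u
            av = sum(p * q for p, q in zip(a_i, v))
            s_i = (b_i - av) / au                  # step at which a_i^T v* = b_i
            s_max = s_i if s_max is None else min(s_max, s_i)
    return s_max
```

For the box \(x_1 \le 4\), \(x_2 \le 3\) with \(v = (1,1)\) and \(u = (1,0)\), the boundary is reached at \(s = 3\).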

We may also begin with \(v \ne 0\) or dispense with any two-dimensional LP. If \(x\) and \(y\) are optimal, or if \(h\) can no longer be maximised, we stop the procedure. When we cannot leave \(v\), we slightly relax all constraints except \(v \ge 0\) and undo this at the end of the iteration step. Since \(\Delta{h}^{*}\) is roughly half of \(\Delta h\) for each iteration step in \(\mathcal{O}({\omega\vartheta}^{2})\), the claim follows by the strong duality theorem ([775], p. 60 - 65).\(\square\)

Remarks: The simplex method and the face algorithm ([916], p. 580 f.) may solve the LP faster for small \(m\) and \(n\). We can easily change the current stock of constraints or variables, because the intex method is a non-transforming method and faster than all known (worst-case) LP-solving algorithms in \(\mathcal{O}({\vartheta\omega}^{3.5})\). We simply have to adjust \(h\) if necessary, since additional variables can be initialised with 0.

Corollary: Every solvable linear system (LS) \(Ax = b\) for \(x \in {}^{\omega}\mathbb{R}^{n}\) can be solved in \(\mathcal{O}({\vartheta}^{3})\) if we write it as LP max \(\{h \in {}^{\omega}\mathbb{R}_{\ge 0} : \pm Ax \le \pm b\}.\square\)
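The rewriting of the LS \(Ax = b\) as the inequality system \(\pm Ax \le \pm b\) used in this corollary can be made explicit (a trivial helper with a name of our choosing):

```python
def ls_as_lp(A, b):
    """Rewrite the linear system Ax = b as the inequality system
    +Ax <= +b  and  -Ax <= -b, whose feasible set is exactly the
    solution set of the LS.  Returns (A', b') with A' stacking A over -A.
    Sketch; A is a list of rows, b a list of right-hand sides."""
    A2 = [row[:] for row in A] + [[-t for t in row] for row in A]
    b2 = list(b) + [-t for t in b]
    return A2, b2
```

Any LP solver over \(\{x : A'x \le b'\}\) (e.g. with the auxiliary height \(h\) of the corollary) then returns a solution of the original system.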

Corollary: An eigenvector \(x \in {}^{\omega}\mathbb{R}^{n} \setminus \{0\}\) of the matrix \(A \in {}^{\omega}\mathbb{R}^{n \times n}\) for the eigenvalue \(\lambda \in {}^{\omega}\mathbb{R}\) can be determined by the solvable LP max \(\{h \in {}^{\omega}\mathbb{R}_{\ge 0} : \pm Ax \le \pm \lambda x\}\) in \(\mathcal{O}({\vartheta}^{3})\).\(\square\)

Corollary: Let \({\alpha }_{j}\) be the \(j\)-th column vector of the matrix \({A}^{-1} \in {}^{\omega}\mathbb{R}^{n \times n}\) and let \({\delta}_{ij}\) be the Kronecker delta. Every LS \({A \alpha }_{j} = {({\delta}_{1j}, ..., {\delta}_{nj})}^{T}\) determining the inverse \({A}^{-1}\) of the regular matrix \(A\) can be solved for \(j = 1, ..., n\) in \(\mathcal{O}({\vartheta}^{3})\). Whether \(A\) is regular can also be determined in \(\mathcal{O}({\vartheta}^{3}).\square\)
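The column-by-column construction of \(A^{-1}\) from the systems \(A\alpha_j = e_j\) can be illustrated with classical Gauss-Jordan elimination (used here in place of the intex method, purely to show the construction; the function name is ours):

```python
def inverse_by_columns(A):
    """Solve A a_j = e_j for j = 1..n simultaneously by Gauss-Jordan
    elimination on the augmented matrix [A | I]; the right half then
    holds the columns a_j of A^{-1}.  Raises ValueError for a matrix
    detected as singular.  Dense sketch with partial pivoting."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]                       # build [A | I]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))   # partial pivoting
        if abs(M[p][c]) < 1e-12:
            raise ValueError("matrix is singular")
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [t / piv for t in M[c]]                     # normalise pivot row
        for r in range(n):
            if r != c:
                f = M[r][c]
                M[r] = [M[r][s] - f * M[c][s] for s in range(2 * n)]
    return [row[n:] for row in M]                          # columns a_j of A^{-1}
```

Running it on \(\operatorname{diag}(2, 4)\) returns \(\operatorname{diag}(0.5, 0.25)\); the singularity check mirrors the corollary's remark that regularity is decided along the way.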

Remarks: The three corollaries transfer easily to the complex case. The intex method is numerically very stable, since the initial data are only slightly altered, and, modified for distributed computing, it can solve in \(\mathcal{O}(1)\). It may also be applied to (mixed) integer problems and branch-and-bound, in particular for nonconvex optimisation. It can easily be extended to convex programmes (with a vector-valued objective function \({f}_{1}\)):

Corollary: Every solvable convex programme min \(\{{f}_{1}(x) : x \in {}^{\omega}\mathbb{R}^{n}, {({f}_{2}(x), ..., {f}_{m}(x))}^{T} \le 0\}\) where the \({f}_{i} \in {}^{\omega}\mathbb{R}\) are convex functions may be solved by the intex method and two-dimensional bisection or Newton's methods in polynomial runtime, if the number of operands \({x}_{j}\) of the \({f}_{i}\) is \(\le {\omega}^{\acute{c}-2}\) and if an \(x\) exists so that \({f}_{i}(x) < 0\) for all \(i > 1\) (see [939], p. 589 ff.).\(\square\)

code of simplex method

© 2008-2018 by Boris Haase

