Linear Programming


In the following section, Set Theory, Topology and Nonstandard Analysis are likewise presupposed.

Diameter theorem for polytopes: The diameter of an \(n\)-dimensional polytope defined by \(m\) constraints for \(m, n \in {}^{\omega}\mathbb{N}_{\ge 2}\) is at most \(2(m + n - 3)\).

Proof: At most \(\acute{m}\) hyperplanes can be assembled into an incomplete two-dimensional cycle, and there are at most \(n - 2\) sideways alternatives in the remaining dimensions. Overcoming each minimal distance requires at most two edges, which yields the factor 2.\(\square\)
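The bound can be checked against familiar polytopes in a minimal sketch (pure Python; the function name is ours, and the cube data are standard facts rather than part of the theorem):

```python
def diameter_bound(m: int, n: int) -> int:
    """The theorem's upper bound 2(m + n - 3) on the diameter of an
    n-dimensional polytope defined by m constraints (m, n >= 2)."""
    assert m >= 2 and n >= 2
    return 2 * (m + n - 3)

# The n-cube has m = 2n facets and edge-graph diameter exactly n,
# comfortably below the stated bound:
for n in range(2, 8):
    assert n <= diameter_bound(2 * n, n)
```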

Theorem for the Strassen algorithm: For sufficiently large \(n := 2^m, m \in {}^{\nu}\mathbb{N}^*, ß := {}_27\) and \(A \in {}^{\nu}\mathbb{C}^{n \times n}\), the GS shows that the runtime \(T(n) = \mathcal{O}(n^{ß})\) [1] is roughly shortened by the factor \(\tilde{3}\) when computing \(AA^H =\)\[ \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} A_{11}^H & A_{21}^H \\ A_{12}^H & A_{22}^H \end{pmatrix} = \begin{pmatrix} A_{11}A_{11}^H+A_{12}A_{12}^H & A_{11}A_{21}^H+A_{12}A_{22}^H \\ A_{21}A_{11}^H+A_{22}A_{12}^H & A_{21}A_{21}^H+A_{22}A_{22}^H \end{pmatrix}.\square\]

Theorem for fast matrix multiplication: For \(A\) as above and \(B = (b_{ij}) \in {}^{\nu}\mathbb{C}^{n \times n}\), \(AB\) can be determined in runtime \(\mathcal{O}(mn^2)\) from \(A_{11}(s B_{11} + B_{12}), A_{21}(s B_{11} + B_{12}), A_{12}(s B_{21} + B_{22})\) and \(A_{22}(s B_{21} + B_{22})\) by putting \(s := \min\{2^k > \max{}_{i,j} (|a_{ij}|^2, |b_{ij}|^2)\hat{n} : k \in {}^{\nu}\mathbb{Z}\}\) and computing modulo \(s\) (see Matrix Multiplication).\(\square\)
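Assuming NumPy, the block identity for \(AA^H\) can be verified directly; the Hermitian symmetry of the result (the lower-left block is the conjugate transpose of the upper-right one) is what permits the roughly threefold saving. All variable names and the random test matrix are our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                   # n = 2^m with m = 2
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

h = n // 2
A11, A12 = A[:h, :h], A[:h, h:]
A21, A22 = A[h:, :h], A[h:, h:]

def H(M):                               # conjugate transpose
    return M.conj().T

# The four blocks of A A^H from the theorem:
B11 = A11 @ H(A11) + A12 @ H(A12)
B12 = A11 @ H(A21) + A12 @ H(A22)
B21 = A21 @ H(A11) + A22 @ H(A12)
B22 = A21 @ H(A21) + A22 @ H(A22)

blockwise = np.block([[B11, B12], [B21, B22]])
assert np.allclose(blockwise, A @ H(A))
assert np.allclose(B21, H(B12))         # symmetry enabling the saving
```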

Theorem: For every regular \(B\; (= A^HA)\) and \(n\) as above, \(B^{-1} \in {}^{\nu}\mathbb{C}^{n\times n}\) can be computed via the Schur complement [2] and identity matrices \(I\) multiplied by \(\varepsilon \in \mathcal{O}(\tilde{\nu})\) in runtime \(T(n) = \mathcal{O}(mn^2)\), since \(T(n) = \hat{T}(\check{n}) + \mathcal{O}(\hat{m}n^2) = 4T(\tilde{4}n) +\mathcal{O}(\hat{m}n^2 + mn^2) = … =\mathcal{O}(mn^2)\) due to the GS.\(\square\)
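A minimal NumPy sketch of the recursive Schur-complement inversion, without the \(\varepsilon\)-regularisation of the theorem (the function name and test data are ours):

```python
import numpy as np

def schur_inverse(B: np.ndarray) -> np.ndarray:
    """Invert a Hermitian positive definite matrix B of size 2^m
    recursively via the Schur complement; a sketch of the theorem's
    runtime idea, not the literal epsilon-disturbed procedure."""
    n = B.shape[0]
    if n == 1:
        return np.array([[1.0 / B[0, 0]]], dtype=B.dtype)
    h = n // 2
    B11, B12 = B[:h, :h], B[:h, h:]
    B21, B22 = B[h:, :h], B[h:, h:]
    B11i = schur_inverse(B11)
    S = B22 - B21 @ B11i @ B12          # Schur complement of B11
    Si = schur_inverse(S)
    TL = B11i + B11i @ B12 @ Si @ B21 @ B11i
    TR = -B11i @ B12 @ Si
    BL = -Si @ B21 @ B11i
    return np.block([[TL, TR], [BL, Si]])

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = A.conj().T @ A                      # B = A^H A is Hermitian p.d.
assert np.allclose(schur_inverse(B) @ B, np.eye(4))
```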

Corollary: Every solvable linear system \(Ax = b\) can be solved for regular or for singular \(A^HA\) disturbed by \(\varepsilon\) as above with precision \(p \in\,]0, 1[\) in runtime \(\mathcal{O}(mn^2).\square\)
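The corollary can be illustrated by a small NumPy sketch solving \(Ax = b\) through the \(\varepsilon\)-disturbed normal equations (the helper name, the concrete \(\varepsilon\) and the test data are our own choices):

```python
import numpy as np

def solve_regularised(A, b, eps=1e-10):
    """Solve Ax = b via the epsilon-disturbed normal equations
    (A^H A + eps*I) x = A^H b, as in the corollary; eps and the use
    of np.linalg.solve are illustrative choices."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + eps * np.eye(n),
                           A.conj().T @ b)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])
x = solve_regularised(A, b)
assert np.allclose(A @ x, b, atol=1e-6)
```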

Theorem: LUX methods (Linear Universal Exact) solve every solvable linear programme without redundant constraints max \(\{c^Tx\} =\) min \(\{b^Ty\}\) where \(n \in {}^{\nu}\mathbb{N}^*, x, y \in {}^{\nu}\mathbb{R}_{\geq 0}^n, Ax = b \in {}^{\nu}\mathbb{R}^n, A^Ty = c \in {}^{\nu}\mathbb{R}^n, A \in {}^{\nu}\mathbb{R}^{n\times n}\) in linear notation as \(B = [A, 0, 0; 0, -A^T, 0; -c^T, b^T, 1]\), rk \(B = \hat{n} + 1\) and rk \(A = n\) for \(z \in {}^{\nu}\mathbb{R}_{\geq 0}\) by \((x^T, y^T, z)^T = B^{-1} (b^T, -c^T, 0)^T\) in runtime \(\mathcal{O}({}_2n\,n^2)\) via the strong duality theorem [3].\(\square\)
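A NumPy sketch of the block system for a deliberately tiny LP with \(A\) regular (the data \(A = I\), \(b\), \(c\) are our own test choices); \(z\) vanishes by strong duality:

```python
import numpy as np

# max c^T x s.t. Ax = b, x >= 0, and its dual min b^T y s.t. A^T y = c.
A = np.eye(2)
b = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])
n = 2
Z = np.zeros((n, n))
z0 = np.zeros((n, 1))

# B = [A, 0, 0; 0, -A^T, 0; -c^T, b^T, 1] as in the theorem:
B = np.block([
    [A,           Z,           z0],
    [Z,          -A.T,         z0],
    [-c[None, :], b[None, :],  np.ones((1, 1))],
])
rhs = np.concatenate([b, -c, [0.0]])
sol = np.linalg.solve(B, rhs)
x, y, z = sol[:n], sol[n:2 * n], sol[-1]

assert np.allclose(A @ x, b)            # primal feasibility
assert np.allclose(A.T @ y, c)          # dual feasibility
assert abs(c @ x - b @ y) < 1e-12       # strong duality, hence z = 0
```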

Conclusion: For \(k \in {}^{\nu}\mathbb{N}_{\le\hat{n}}^*\), Wolfe’s ‘ad hoc’ method [4] can determine a constraint, if any, in runtime \(\mathcal{O}(n^2)\) to be redundant via the linear programme max \(\{B_k (x^T, y^T, z)^T\} = d_k\) (see above). Each time, it returns to phase I of the simplex method after real multiples of \(d_k\) have been added to the remaining constraints so that these equal \(0\); e.g. \(||B_k||\) may serve as selection criterion. The total runtime is \(\mathcal{O}(rn^2)\), where \(r\) specifies the number of redundant constraints.\(\square\)

Corollary: Whether \(A \in {}^{\nu}\mathbb{C}^{n\times n}\) is regular can be decided in runtime \(\mathcal{O}({}_2n\,n^2)\) (replacing the conventional determinant), since rk Re \(A \ne n \ne\) rk Im \(A\) follows indirectly from \(z \in {}^{\nu}\mathbb{C}^{n*}\) and \(Az = 0 = \overline{Az}.\square\)
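One standard way to reduce the regularity of a complex matrix to real rank computations is via its real representation; the corollary's own argument differs in detail, so the following NumPy sketch is only an illustration of the reduction idea:

```python
import numpy as np

def is_regular(A: np.ndarray) -> bool:
    """Decide regularity of complex A via a real-valued rank test on the
    real representation [[Re A, -Im A], [Im A, Re A]] (a standard
    reduction; illustrative, not the corollary's literal procedure)."""
    n = A.shape[0]
    R = np.block([[A.real, -A.imag], [A.imag, A.real]])
    return np.linalg.matrix_rank(R) == 2 * n

assert is_regular(np.array([[1.0 + 1j, 0.0], [0.0, 2.0]]))
assert not is_regular(np.array([[1.0, 1.0], [1.0, 1.0]], dtype=complex))
```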

Conclusion: The polynomial \(c_{\acute{n}}x^{\acute{n}} + … + c_1x + c_0\) can be determined in \(\mathcal{O}({}_2n\,n^2)\) from measurements \((x_j, y_j) \in {}^{\nu}\mathbb{R}^2\) and residues \(r_j \in {}^{\nu}\mathbb{R}\) for \(j = 1, …, m\) and \((c, x)^T \in {}^{\nu}\mathbb{R}^{\grave{n}}\) from \(y = r + Xc\) and \(X^Tr = 0\) where \(X = \left(x_j^k\right)\) for \(k = 0, …, \acute{n}\) (also under additional constraints [5]).\(\square\)
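The conditions \(y = r + Xc\) and \(X^Tr = 0\) combine into the normal equations \(X^TXc = X^Ty\); a minimal NumPy sketch with our own test data:

```python
import numpy as np

# Least-squares polynomial fit: y = r + Xc with X^T r = 0
# yields the normal equations X^T X c = X^T y.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2.0 + 3.0 * xs                     # exact line, so the residues vanish
n = 2                                   # number of coefficients c_0, c_1

X = np.vander(xs, n, increasing=True)   # X = (x_j^k), k = 0, ..., n-1
c = np.linalg.solve(X.T @ X, X.T @ ys)
r = ys - X @ c

assert np.allclose(c, [2.0, 3.0])
assert np.allclose(X.T @ r, 0.0)
```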

Conclusion: The discrete Fourier transform (see Scientific Computing) can determine the Padé approximant \({a(x)}/{b(x)}={{\LARGE{\textbf{+}}}_{m=0}^n {a_m x^m}}/{{\LARGE{\textbf{+}}}_{m=0}^n {b_m x^m}}\) of \(f(x)\) for \(x \in {}^{\nu}\mathbb{R}\) in \(\mathcal{O}({}_2n\,n^2).\square\)
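A plain alternative to the DFT-based route is the standard linear system for the Padé coefficients (normalising \(b_0 = 1\)); the function name and the test on the exponential series are our own choices:

```python
import numpy as np
from math import factorial

def pade(c, L, M):
    """[L/M] Pade approximant a(x)/b(x) from Taylor coefficients c,
    with b_0 = 1, via the standard linear system -- an illustrative
    alternative to the DFT-based computation of the conclusion."""
    C = np.zeros((M, M))
    rhs = np.zeros(M)
    for i in range(M):                  # equations for k = L+1, ..., L+M
        k = L + 1 + i
        rhs[i] = -c[k]
        for j in range(1, M + 1):
            if 0 <= k - j < len(c):
                C[i, j - 1] = c[k - j]
    b = np.concatenate([[1.0], np.linalg.solve(C, rhs)])
    a = np.array([sum(b[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return a, b

# exp(x) has Taylor coefficients 1/k!; its [1/1] Pade approximant
# is (1 + x/2) / (1 - x/2).
c = [1.0 / factorial(k) for k in range(4)]
a, b = pade(c, 1, 1)
assert np.allclose(a, [1.0, 0.5])
assert np.allclose(b, [1.0, -0.5])
```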

Conclusion: The linear programme max \(\{x : y_{\acute{m}} - xy_m = b_{\acute{m}}, y_n = y_0 = 0, x \le x_0 \in {}^{\nu}\mathbb{R}\}\) can determine, for \(m = 1, …, n\) via the Horner scheme, an \(x \in {}^{\omega}\mathbb{R}\) solving the \(n\)-polynomial \(b_{\acute{n}}x^{\acute{n}} + … + b_1x + b_0 = 0\) in \(\mathcal{O}({}_2n\,n)\).\(\square\)
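The Horner recursion underlying the constraints can be sketched in pure Python; the bisection below is a plain stand-in for the LP formulation, used only to exhibit a root (all names are ours):

```python
def horner(b, x):
    """Evaluate b[-1]*x^(n-1) + ... + b[1]*x + b[0] by the Horner
    scheme, the recursion used in the conclusion's constraints."""
    y = 0.0
    for coeff in reversed(b):
        y = y * x + coeff
    return y

def bisect_root(b, lo, hi, iters=100):
    """Find a sign-change root of the polynomial with coefficients b
    by bisection; an illustrative stand-in for the LP above."""
    flo = horner(b, lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if flo * horner(b, mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, horner(b, mid)
    return 0.5 * (lo + hi)

# x^2 - 2 = 0 on [0, 2] has the root sqrt(2):
root = bisect_root([-2.0, 0.0, 1.0], 0.0, 2.0)
assert abs(root * root - 2.0) < 1e-12
```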

Corollary: LUX methods and two-dimensional bisection or Newton’s method may solve most solvable convex programmes min \(\{{f}_{1}(x) : x \in {}^{\nu}\mathbb{R}^{n}, {({f}_{2}(x), …, {f}_{m}(x))}^{T} \le 0\}\), where all \({f}_{k} \in {}^{\nu}\mathbb{R}\) are convex functions for \(k = 1, …, m\), in polynomial runtime if the number of operands \({x}_{j}\) of all \({f}_{k}\) is \(\le {\nu}^{\nu-3}\) and if an \(x\) exists such that \({f}_{k}(x) < 0\) for all \(k > 1\) [6].\(\square\)

code of the simplex method

© 2008-2025 by Boris Haase


References

  1. Golub, Gene H.; van Loan, Charles F.: Matrix Computations; 3rd ed.; 1996; Johns Hopkins University Press; Baltimore, p. 31 ff.
  2. see Knabner, Peter; Barth, Wolf: Lineare Algebra; 2nd rev. and ext. ed.; 2018; Springer; Berlin, p. 223
  3. see Dantzig, George B.: Lineare Programmierung und Erweiterungen; 1st ed.; 1966; Springer; Berlin, pp. 150-155
  4. see Maros, István: Computational Techniques of the Simplex Method; 1st ed.; 2003; Kluwer; Boston, pp. 234-237
  5. cf. Geiger, Carl; Kanzow, Christian: Theorie und Numerik restringierter Optimierungsaufgaben; 1st ed.; 2002; Springer; Berlin, p. 196 ff.
  6. cf. loc. cit., p. 56 ff.