
In the following, the sections Set Theory, Topology, and Nonstandard Analysis are again presupposed.
Diameter theorem for polytopes: The diameter of an \(n\)-dimensional polytope defined by \(m\) constraints, where \(m, n \in {}^{\omega}\mathbb{N}_{\ge 2}\), is at most \(2(m + n - 3)\).
Proof: At most \(\acute{m}\) hyperplanes can be assembled into an incomplete cycle of dimension 2, and there are at most \(n - 2\) sideways alternatives in all remaining dimensions. Overcoming each minimal distance requires at most two edges, which yields the factor 2.\(\square\)
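As a quick sanity check (this example is mine, not from the source): the \(n\)-cube is defined by \(m = 2n\) facet constraints and its graph has diameter \(n\), well within the bound, since \[n \le 2(2n + n - 3) = 6n - 6 \quad (n \ge 2).\]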
Theorem for the Strassen algorithm: For sufficiently large \(n := 2^m\), \(m \in {}^{\nu}\mathbb{N}^*\), \(ß := {}_{2}7\) and \(A \in {}^{\nu}\mathbb{C}^{n \times n}\), the GS shows that the runtime \(T(n) = \mathcal{O}(n^{ß})\) [1] is roughly shortened by the factor \(\tilde{3}\) when computing \[AA^H = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} A_{11}^H & A_{21}^H \\ A_{12}^H & A_{22}^H \end{pmatrix} = \begin{pmatrix} A_{11}A_{11}^H+A_{12}A_{12}^H & A_{11}A_{21}^H+A_{12}A_{22}^H \\ A_{21}A_{11}^H+A_{22}A_{12}^H & A_{21}A_{21}^H+A_{22}A_{22}^H \end{pmatrix},\] since the lower-left block is the adjoint of the upper-right one and the diagonal blocks are Hermitian and again of the form \(XX^H\), so only six of the eight block products need to be computed.\(\square\)

Theorem for the fast matrix multiplication: For \(A\) and \(B = (b_{ij}) \in {}^{\nu}\mathbb{C}^{n \times n}\) with \(n\) as above, \(AB\) can be determined in runtime \(\mathcal{O}(p^2)\) for \(p = mn\) from \(A_{11}(s B_{11} + B_{12})\), \(A_{21}(s B_{11} + B_{12})\), \(A_{12}(s B_{21} + B_{22})\) and \(A_{22}(s B_{21} + B_{22})\) by putting \(s := \min\{2^k > \max{}_{i,j} (|a_{ij}|^2, |b_{ij}|^2)\,\hat{n} : k \in {}^{\nu}\mathbb{Z}\}\) and computing modulo \(s\) (see Matrix Multiplication).\(\square\)
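The modulo-\(s\) trick packs all entries into single large numbers so that one big-integer multiplication delivers every inner product at once. The following is a minimal Python sketch of this packing idea (Kronecker substitution, also called binary segmentation) for nonnegative integer matrices; the function name and the restriction to integers are mine, and the author's block scheme with \(A_{11}(sB_{11} + B_{12})\) etc. is not reproduced literally:

```python
def fast_matmul(A, B):
    """Multiply two n x n matrices of nonnegative integers with a single
    big-integer multiplication (Kronecker substitution / binary segmentation).
    A and B are lists of lists; Python's unbounded ints do the packing."""
    n = len(A)
    m = max(max(map(max, A)), max(map(max, B)))          # largest entry
    s = 1
    while s <= 2 * n * m * m:                            # base 2^k > 2n*max^2,
        s *= 2                                           # so digits never collide
    # pack A row-major with stride n^2; pack B so that the k-indices cancel
    P = sum(A[i][k] * s ** (i * n * n + k) for i in range(n) for k in range(n))
    Q = sum(B[k][j] * s ** (j * n + n - 1 - k) for k in range(n) for j in range(n))
    R = P * Q                                            # one big-number product
    # the base-s digit at position i*n^2 + j*n + n-1 is exactly (AB)[i][j]
    return [[R // s ** (i * n * n + j * n + n - 1) % s for j in range(n)]
            for i in range(n)]
```

For example, `fast_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])` returns `[[19, 22], [43, 50]]`.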
Theorem: For every regular \(C\; (= A^HA)\) and \(n\) as above, \(C^{-1} \in {}^{\nu}\mathbb{C}^{n\times n}\) can be computed via the Schur complement [2] and identity matrices \(I\) multiplied by \(\varepsilon \in \mathcal{O}(\tilde{\nu})\) in runtime \(T(n) = \mathcal{O}(p^2)\), since \(T(n) = \hat{T}(\check{n}) + \mathcal{O}(p^2) = 4T(\tilde{4}n) + \mathcal{O}(p^2 + \tilde{2}p^2) = \ldots = \mathcal{O}(p^2)\) due to the GS.\(\square\)
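A minimal NumPy sketch of this recursion (my own illustration, assuming \(n\) is a power of two; `eps` stands in for the perturbation \(\varepsilon\)):

```python
import numpy as np

def block_inverse(C, eps=1e-12):
    """Invert a Hermitian matrix C = A^H A recursively via the Schur
    complement; eps*I mirrors the perturbation epsilon in the text and
    keeps the leading blocks regular. n is assumed to be a power of two."""
    n = C.shape[0]
    if n == 1:
        return np.array([[1.0 / C[0, 0]]])
    h = n // 2
    C11, C12, C21, C22 = C[:h, :h], C[:h, h:], C[h:, :h], C[h:, h:]
    C11i = block_inverse(C11 + eps * np.eye(h))      # invert leading block
    S = C22 - C21 @ C11i @ C12                       # Schur complement
    Si = block_inverse(S + eps * np.eye(h))
    U = C11i @ C12 @ Si                              # reused product
    return np.block([[C11i + U @ C21 @ C11i, -U],
                     [-Si @ C21 @ C11i, Si]])
```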
Corollary: Every solvable linear system \(Ax = b\) can be solved, for regular \(A^HA\) or for singular \(A^HA\) perturbed by \(\varepsilon\) as above, to any precision \(p \in\,]0, 1[\) in runtime \(\mathcal{O}(p^2).\square\)
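Continuing the previous sketch (again my own illustration), the corollary amounts to a normal-equation solve in which the \(\varepsilon\)-regularised block inverse absorbs a possible singularity of \(A^HA\):

```python
def solve(A, b, eps=1e-12):
    """Solve A x = b via x = (A^H A)^{-1} A^H b, using the eps-regularised
    block_inverse from the previous sketch for (possibly singular) A^H A."""
    Ah = A.conj().T
    return block_inverse(Ah @ A, eps) @ (Ah @ b)
```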
Theorem for the FTD: If a regular matrix \(A \in {}^{\nu}\mathbb{R}^{n \times n}\) has a structure that can be approximated by diagonal convolution (e.g. Toeplitz-, Hankel-, or block-Toeplitz-like), then there exist (possibly rectangular) Toeplitz matrices \(T_{1}, T_{2}\) with \(A \approx T_{1}T_{2}\) whose entries are determined by diagonal convolution sequences \((x_{i})\) and \((y_{j})\) converging to \(1\). If \(T_{1}^{-1}\) and \(T_{2}^{-1}\) are computed recursively, this assumption allows any linear system \(Az = b \in {}^{\nu}\mathbb{R}^{n}\) to be solved in runtime \(\mathcal{O}(p^{2})\).
Proof and algorithm: Initialise \(\check{x}_{0}=\max |A_{ij}|\) and successively construct the diagonal sequences \(x_{k}, y_{k}\) from arithmetic means of adjacent \(A_{ij}\). Compute recursively (until the product \(T_{1}T_{2}\) converges to \(A\)) \[z_{1}=x_{0}y_{0}+x_{1}y_{1}+ \ldots,\quad z_{2}=x_{0}y_{-1}+x_{1}y_{0}+ \ldots\] and so forth. Normalising the columns of \(T_{1}\) and \(T_{2}\) ensures stability; afterwards, backward substitution is carried out along the diagonals. The recursion terminates after at most \(\mathcal{O}(m)\) steps per diagonal.\(\square\)
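Since the FTD is only sketched here, the following NumPy snippet is a stand-in rather than the author's algorithm: it reads the "arithmetic means of adjacent \(A_{ij}\)" as diagonal-wise means, builds the resulting Toeplitz surrogate, and solves \(Az = b\) by iterative refinement against it (both function names are mine):

```python
import numpy as np

def toeplitz_projection(A):
    """Diagonal-wise arithmetic means: the nearest Toeplitz matrix to A."""
    n = A.shape[0]
    T = np.empty_like(A, dtype=float)
    for d in range(-(n - 1), n):
        i = np.arange(max(0, -d), min(n, n - d))   # row indices of diagonal d
        T[i, i + d] = A[i, i + d].mean()           # one constant per diagonal
    return T

def solve_via_toeplitz(A, b, steps=20):
    """Iterative refinement of A z = b against the Toeplitz surrogate;
    converges quickly when A is close to Toeplitz, as the FTD assumes."""
    T_inv = np.linalg.inv(toeplitz_projection(A))  # a fast Toeplitz solver fits here
    z = T_inv @ b
    for _ in range(steps):
        z += T_inv @ (b - A @ z)                   # residual correction
    return z
```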
Corollary: The transfer to complex numbers and eigenvalues is possible. For \(H = T_{1}^{T}T_{1}\) and \(Hz = d\), the FTD provides a symmetric factorisation \(H \approx P^{T}DP\) with \(D\) diagonal and \(P\) block-Toeplitz. In this way, the LUX method (Linear Universal Exact) solves linear programmes directly in runtime \(\mathcal{O}(n^{2}).\square\)
Remark: The FTD replaces classical Cholesky or QR factorisations [3] by recursive convolution operators and reduces both memory usage and rounding errors. Toeplitz, Hankel, and circulant structures are thus exactly preserved whenever they occur in \(A\) or can be approximated by convolution. For BigInt or BigFloat arithmetic, the scaling is practically linear. In interior-point methods [4], the FTD reduces the overall computational effort from \(\mathcal{O}(n^{3})\) to \(\mathcal{O}(p^{2})\).
Corollary: The polynomial \(c_{n}x^{n}+ \ldots +c_{0}\) can be determined from measurement data \((x_{j}, y_{j})\) through the Toeplitz-based convolution \(y = Xc\) and the orthogonal residual condition \(X^{T}r = 0\) in \(\mathcal{O}(n^{2})\). This yields fast structured interpolation and Padé approximation via the discrete Fourier transform.\(\square\)
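A small NumPy sketch of the residual condition (my own illustration; a structure-exploiting solver would replace `np.linalg.solve` on the Hankel Gram matrix):

```python
import numpy as np

def poly_fit(xs, ys, deg):
    """Least-squares coefficients c_0..c_deg of y ~ c_0 + ... + c_deg x^deg.
    X^T r = 0 yields the normal equations (X^T X) c = X^T y, and the Gram
    matrix X^T X is a Hankel matrix of power sums -- the structure that
    Toeplitz/Hankel-based convolution exploits."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    mom = [np.sum(xs ** k) for k in range(2 * deg + 1)]   # power sums
    G = np.array([[mom[i + j] for j in range(deg + 1)]    # Hankel Gram matrix
                  for i in range(deg + 1)])
    rhs = np.array([np.dot(ys, xs ** i) for i in range(deg + 1)])
    return np.linalg.solve(G, rhs)                        # c_0 .. c_deg
```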
Corollary: Fast-Shift-Recursive (FSR) and FTD jointly provide a universal method for solving convex and linear programmes [5] with structured coefficient matrices in polynomial time, provided that the number of operands is at most \(\nu^{\nu-3}\) and a strictly feasible solution exists.\(\square\)
© 2008-2025 by Boris Haase
References
1. Golub, Gene H.; van Loan, Charles F.: Matrix Computations; 3rd ed.; 1996; Johns Hopkins University Press; Baltimore, p. 31 ff.
2. see Knabner, Peter; Barth, Wolf: Lineare Algebra; 2nd, revised and extended ed.; 2018; Springer; Berlin, p. 223
3. cf. Golub, loc. cit., pp. 143-149 and 223-230
4. see Vanderbei, Robert J.: Linear Programming; 3rd ed.; 2008; Springer; New York, pp. 287-381
5. see loc. cit., pp. 385-435