Homepage of Boris Haase





#80: Revision of Complete Informatics as well as Renaming of Sorting and Searching to Theoretical Informatics on 05.09.2019

In the following, we presuppose Set Theory and answer the questions, for \(n \in {}^{c}\mathbb{N}\): How can \(n\) values be sorted in \(\mathcal{O}(1)\)? How can we calculate the minimum or maximum framework (spanning tree) of a graph with \(n\) nodes in \(\mathcal{O}(\log \, n)\)? Can the halting problem be solved? Why is \(P = NP\) true?

Let the memory be partially organised in the form of a tree of degree \(k \in {}^{c}\mathbb{N}\) with \(k^{\grave{m}}\) memory locations, where \(k^{\grave{m}} = 2^h\) for \(h, m \in {}^{c}\mathbb{N}\). Each memory location is able to accommodate each value to be sorted completely. If values occur several times, we append a consecutive index to each value; together with the old value, this index forms the new value. Each value is coded in the base-\(k\) system and there are w.l.o.g. at most \(k^m\) values to sort.

From the tree root, \(k^m\) data lines branch off, \(k^{\acute{m}}\) from each of its successors, \(k^{m-2}\) from their successors, and so forth. The \(j\)-th intermediate node on a path corresponds hierarchically to the \(j\)-th digit in base \(k\) of the value to be sorted. In the leaves, we can store the values' addresses instead of the values. Then, in one step, all values can be saved in the leaves and, in a further step, retrieved in sorted order at the root. It is sufficient to push the values into the tree only until they separate.
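In sequential software, pushing the values digit by digit into such a tree can be sketched as a base-\(k\) trie. This is an illustration only: the \(\mathcal{O}(1)\) of the parallel hardware described above is not attainable this way, and the chosen widths \(k\) and \(m\) are illustrative assumptions.

```python
# Sketch: sorting via a base-k trie, mirroring the tree of degree k described
# above. Sequential software cannot reach the O(1) claimed for the parallel
# hardware; this only illustrates the digit-wise separation of the values.

K = 4   # degree of the tree (assumed for illustration)
M = 3   # number of digits, so at most K**M values

def trie_sort(values, k=K, m=M):
    root = {}
    for v in values:
        # base-k digits, most significant first
        digits = [(v // k**i) % k for i in reversed(range(m))]
        node = root
        for d in digits:
            node = node.setdefault(d, {})
        node.setdefault('leaf', []).append(v)   # duplicates kept in order: stable
    out = []
    def walk(node):
        if 'leaf' in node:
            out.extend(node['leaf'])
        for d in range(k):                      # ascending digit order
            if d in node:
                walk(node[d])
    walk(root)
    return out

print(trie_sort([37, 5, 5, 62, 0, 21]))  # [0, 5, 5, 21, 37, 62]
```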

If there is a restriction of one data line per memory location, we have to read out the values one after the other. Adding an index in the case of multiple values is now unnecessary, as multiple values can be managed in a separate list. If a value passes through an intermediate node, the latter sets its bit to 1, but to 0 as soon as the intermediate node has no successor left to read.

The sorting order determines the first value. Saving, checking the successor, masking bits and reading out takes \(\mathcal{O}(n)\). Sorting in an additional value increases the complexity by \(\mathcal{O}(\log \, n)\), since this corresponds to a binary search. The complexity is inversely proportional to the number of tree nodes in the memory. If we use \(d \in {}^{c}\mathbb{N}\) data lines, it decreases by the factor \(\hat{d}\). If the values occupy \(g\) times the amount of memory space, it increases by the factor \(g\).

If the memory does not contain a tree of degree \(k\), it increases by the factor \(\log_{k}n\), because then every intermediate node must be read for each value. If we want to use the first \(k^p\) memory locations in another way, with \(p \in {}^{c}\mathbb{N}\), all intermediate nodes of the \(p\)-th stage must be searched through. The complexity increases marginally if the memory is well filled; otherwise, by the worst-case factor \(\mathcal{O}(k^p)\).

The optimal sorting algorithm bit sort will now be presented. It sorts stably with the time complexity \(\mathcal{O}(n)\) and requires \(\mathcal{O}(n)\) additional storage space. We assume that the comparison of two values requires the time complexity \(\mathcal{O}(1)\), the values are available as a bit sequence, and all bits of a value can also be processed in \(\mathcal{O}(1)\). We further assume that the memory is partially realised as a binary tree and that the path from root to leaf can also be traversed in \(\mathcal{O}(1)\).

If \(p \in {}^{c}\mathbb{N}\) is the number of parallel steps for a parallel bit sort, the time complexity is \(\mathcal{O}(\hat{p} n)\). Each insertion into the always sorted tree or output of a value has the time complexity \(\mathcal{O}(1)\). The memory is provided with the following intelligence: each tree node stores the value reaching it independently and can pass it on to the next tree node if it has already been visited, or, otherwise, can return its address to the central unit. The root does not correspond to any bit value.

We start with the most significant bit of a value. Each of its consecutively processed bits determines the next node within the binary tree. If all bits have been processed, or if there is a difference from the compared bit and the associated node has not yet been visited, we store in the following node (if any) the non-tree address of the remaining bit string of the value, or the number of values for this node if the sequence of equal values does not matter.
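The insertion rule just described can be sketched sequentially: the bits of a value, most significant first, steer it through a binary tree, and where its path first separates, the remaining bit string is deposited together with a count for equal values. This is only an illustration under the stated assumptions; the \(\mathcal{O}(1)\) per value presupposes the intelligent memory described above.

```python
# Sketch of bit sort: each value's bits (MSB first) steer it through a binary
# trie; where the path first separates, the remaining bit string is stored
# together with a count for equal values, as described in the text.

def insert(node, bits, n=1):
    b, rest = bits[0], bits[1:]
    if b not in node:
        node[b] = ('leaf', rest, n)          # store the remaining bit string
        return
    child = node[b]
    if isinstance(child, tuple):             # node already holds a remainder
        _, old_rest, old_n = child
        if old_rest == rest:                 # equal values: just count them
            node[b] = ('leaf', rest, old_n + n)
        else:                                # push both one level deeper
            node[b] = {}
            insert(node[b], old_rest, old_n)
            insert(node[b], rest, n)
    else:
        insert(child, rest, n)

def read_out(node, prefix=''):
    out = []
    for b in '01':                           # ascending order
        if b in node:
            child = node[b]
            if isinstance(child, tuple):
                _, rest, n = child
                out += [int(prefix + b + rest, 2)] * n
            else:
                out += read_out(child, prefix + b)
    return out

def bit_sort(values, bits=8):
    root = {}
    for v in values:
        insert(root, format(v, '0%db' % bits))
    return read_out(root)

print(bit_sort([5, 3, 3, 200, 7]))  # [3, 3, 5, 7, 200]
```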

The time complexity of all methods which sort values using a bit length \(b = q a \in {}^{c}\mathbb{N}\) alongside an address length \(a \in {}^{c}\mathbb{N}\) increases from \(\mathcal{O}(1)\) to \(\mathcal{O}(q)\) for each value. If \(r\) is the average value of \(q\), at least one \(\mathcal{O}(n)\) must be replaced by \(\mathcal{O}(r n)\) in the time complexity of the sorting procedure. If \(s \in {}^{c}\mathbb{N}\) is the average number of different (!) values that end up in a bucket after sorting to bit length \(a\) and whose bit length is greater than \(a\), then the time complexity of the procedure is \(\mathcal{O}((\log \, s + r) n)\).

In the process, the mentioned values of a bucket are stored in one AVL tree per bucket, or in a B-tree if we store them in a memory that is relatively slow compared to the main memory. There, we always work only with the significant part value, of address length, of a value. If we dispense with completely sorted values after every processing of a value, the complete sort is temporally reduced to \(\mathcal{O}(r n)\) by using two main memory areas.

The second main memory area is cleared after each pass so as to be available for the next pass until a bucket has been processed. In the first main memory area, we always just write back the addresses in the current sorting order before further sorting the values with the same part value just processed. Each value must be stored contiguously in memory, so that we can access each part value quickly.
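The pass-wise processing with part values of address length can be sketched as an MSD radix sort over chunks of \(a\) bits; the two main memory areas are modelled here by plain Python lists, and the widths are illustrative assumptions.

```python
# Sketch of the pass-wise part-value processing above, as an MSD radix sort
# over chunks of a bits. The bucket lists play the role of the second main
# memory area and are cleared (rebuilt) per pass; widths are assumptions.

A = 4  # part-value width in bits (assumed "address length")

def chunk_sort(values, total_bits=16, a=A):
    def rec(vals, shift):
        if len(vals) <= 1 or shift < 0:
            return vals
        buckets = [[] for _ in range(1 << a)]     # second memory area, per pass
        for v in vals:
            buckets[(v >> shift) & ((1 << a) - 1)].append(v)
        out = []                                  # write back in current order
        for b in buckets:
            out.extend(rec(b, shift - a))         # sort equal part values deeper
        return out
    return rec(values, total_bits - a)

print(chunk_sort([513, 2, 2, 40000, 77]))  # [2, 2, 77, 513, 40000]
```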

Using the \(\mathcal{O}(r n)\) method for the master data and the \(\mathcal{O}((\log \, s + r) n)\) method for the update data combines well. In particular, table indexes can be created quickly. For large tables, it makes a significant difference whether an index is created in \(\mathcal{O}(r n)\) instead of in \(\mathcal{O}(r n \, \log \, n)\), and whether a value can then on average be found in \(\mathcal{O}(\log \, s + r)\) instead of in \(\mathcal{O}(\log \, n + r)\) or worse. Sorting by merging sorted lists is also possible in \(\mathcal{O}(r n)\).
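Merging already sorted lists in a single linear pass is a standard operation; Python's standard library, for instance, provides it directly:

```python
# Merging two sorted lists in one linear pass with the standard library.
import heapq

a = [1, 4, 9]
b = [2, 4, 8, 16]
print(list(heapq.merge(a, b)))  # [1, 2, 4, 4, 8, 9, 16]
```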

Discussion of bit sort: it lends itself well to parallel programming. Under the given conditions, it is optimal both for the rapid ongoing building of table indexes in possibly fast-growing databases and for searching. Search trees do not need to be balanced. If all values are to be read successively and none of them is too large, it is faster than every sorting method known today. Its hardware implementation is rather unusual.

Let \(n \in {}^{c}\mathbb{N}\) be the number of consecutively numbered nodes of a finite simple connected graph. (If it is not simple, we keep only the smallest of any parallel edges and remove all loops, sorting out in \(\mathcal{O}(n)\). If it is not connected, we remove all isolated nodes in \(\mathcal{O}(n)\).) If negative weights are excluded, Dijkstra's algorithm needs \(\mathcal{O}(n)\) steps for every minimum path in the graph, since sorting as well as parallel adding and comparing are possible in \(\mathcal{O}(1)\).
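For comparison, a conventional sequential Dijkstra with a binary heap runs in \(\mathcal{O}((|V|+|E|) \log |V|)\); the \(\mathcal{O}(n)\) above presupposes the parallel sorting, adding and comparing. A minimal sketch (graph and names illustrative):

```python
# Sequential Dijkstra with a binary heap; adj[u] = list of (v, weight >= 0).
import heapq

def dijkstra(adj, source):
    """Minimum distances from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

adj = {0: [(1, 7), (2, 3)], 1: [(3, 1)], 2: [(1, 1), (3, 8)], 3: []}
print(dijkstra(adj, 0))  # {0: 0, 1: 4, 2: 3, 3: 5}
```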

Its minimum or maximum framework can be calculated in \(\mathcal{O}(n \, \log \, n)\), or even in \(\mathcal{O}(\log \, n)\). The edges of a node can be sorted in parallel in \(\mathcal{O}(n)\) or \(\mathcal{O}(1)\). Let all edges be coloured white at the beginning. Each node takes the edge with the smallest weight among all white-coloured edges emanating from it and appends to this weight, bit by bit, first the smaller and then the larger number of the edge's two endpoints, forming a value. It stores the smallest value that reaches it. This is possible in \(\mathcal{O}(n)\) or \(\mathcal{O}(1)\).

Blackening the edges between every two nodes of the same minimal framework requires at most the same time. Each node then has a nearest neighbouring node. Continuing with the white-coloured edges, the smallest value of a minimal framework can be determined in \(\mathcal{O}(n)\) or \(\mathcal{O}(1)\) in each of \(\mathcal{O}(\log \, n)\) steps, until the entire minimal framework is formed in \(\mathcal{O}(n \, \log \, n)\) or \(\mathcal{O}(\log \, n)\). For a maximum framework, we subtract all edge weights from a sufficiently large constant.
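The round-wise scheme above (every component selects its lightest white edge, then components merge, in \(\mathcal{O}(\log \, n)\) rounds) is essentially Borůvka's algorithm. A sequential union-find sketch, which in software needs \(\mathcal{O}(|E| \, \log \, |V|)\) rather than the parallel bounds claimed above:

```python
# Boruvka-style minimum spanning tree: in each round, every component picks
# its cheapest outgoing edge, then the chosen edges merge components.
# Sequential sketch with union-find; names and the example graph illustrative.

def boruvka_mst(n, edges):
    """edges: list of (weight, u, v); returns the MST's total weight."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    total, components = 0, n
    while components > 1:
        cheapest = {}                       # component root -> lightest edge
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                for r in (ru, rv):
                    if r not in cheapest or w < cheapest[r][0]:
                        cheapest[r] = (w, ru, rv)
        if not cheapest:
            break                           # graph not connected
        for w, ru, rv in cheapest.values():
            if find(ru) != find(rv):        # re-check to avoid cycles
                parent[find(ru)] = find(rv)
                total += w
                components -= 1
    return total

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
print(boruvka_mst(4, edges))  # 7
```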

Theorem: Every Turing machine with a well-defined calculation rule stops at some point.

Proof: If an (infinite) continuation is neither implicitly nor explicitly prescribed, every Turing machine must stop. A self-reference that commands and prohibits this at the same time is nonsensical.\(\square\)

Remark: The comparable barber paradox makes the last sentence of the proof intelligible (see [968], p. 324). The halting problem is thus newly answered with a positive response. The \(P\) versus \(NP\) problem is solved by the

Theorem: If \(\mathcal{L}\) is the set of all (type 0) languages \(L_k\) of finite words with maximum length \(c\) for inconcrete \(|\mathcal{L}| = 2^{({c}^{\grave{c}}-1)/\acute{c}}\) (using the geometric series) and \(k \in \{1, \dots, |\mathcal{L}|\}\), then \(P = NP = \mathcal{L}\).

Proof: Since every deterministic Turing machine can also be considered non-deterministic (cf. [972], p. 162, 318 and 361 f.), \(P \subseteq NP\) is true. Because \(P\) contains all languages \(L_k\), and \(NP\) only those ones, the claim follows.\(\square\)
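The cardinality stated in the theorem can be made explicit. Assuming an alphabet of \(c\) symbols and counting all words of length at most \(c\) including the empty word, with \(\grave{c} = c + 1\) and \(\acute{c} = c - 1\), the geometric series yields \[1 + c + c^2 + \dots + c^c = \frac{c^{\grave{c}} - 1}{\acute{c}},\] and since every language \(L_k\) is a subset of this finite set of words, \(|\mathcal{L}| = 2^{(c^{\grave{c}}-1)/\acute{c}}\).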

© 05.09.2019 by Boris Haase

