


Sorting and Searching


Is sorting values in O(1) possible? Yes, it is, in two steps.

Suppose we had no hardware restrictions: let the memory be organised as a tree of degree k and have k^(m+1) memory locations, with k^(m+1) = 2^h for a natural number h. Each memory location can store any value to be sorted in full. If values occur more than once, let a consecutive index be attached to each such value, the two together constituting the new value. Let each value be coded in the base-k system, and let there be, w.l.o.g., at most k^m values to sort.

From the root, k^m data lines branch out; k^(m-1) from each of its successors, k^(m-2) from their successors, and so on. The z-th intermediate node corresponds hierarchically to the z-th digit of the value to be sorted in the base-k system. Alternatively, the leaves can store the addresses of the values instead of the values themselves. Then all values can be stored in the leaves in one step and fetched back, sorted, into the root in a further step. It is sufficient to move the values only so far into the tree that they separate.
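To make this concrete, the following is a small sequential toy model in Python; the function name and data layout are invented for the example. Values are routed through a degree-k digit tree and read back in digit order. The one-step behaviour described above requires the parallel data lines; this sequential stand-in needs O(n·m) time instead.

    # Toy model of the degree-k memory tree: route each value by its base-k
    # digits (most significant first), then read the leaves back in order.
    def tree_sort_base_k(values, k, m):
        """Sort non-negative integers < k**m via a degree-k digit tree."""
        root = {}
        for v in values:
            node = root
            for i in range(m - 1, -1, -1):         # z-th digit chooses the branch
                node = node.setdefault((v // k**i) % k, {})
            node.setdefault('vals', []).append(v)  # duplicates keep arrival order
        out = []
        def collect(node):
            if 'vals' in node:                     # leaves occur only at full depth
                out.extend(node['vals'])
            for d in range(k):
                if d in node:
                    collect(node[d])
        collect(root)
        return out

    print(tree_sort_base_k([7, 3, 7, 0, 5], k=4, m=2))   # [0, 3, 5, 7, 7]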

If we now have the limit of one data line per memory location, the values must be read in succession. Attaching an index to multiple values can now be dropped, since multiple occurrences can be managed in a separate list. Every intermediate node memorises which values passed through it by setting the corresponding bit for each value to 1. This bit is reset to 0 once the intermediate node has no more successors to read. For an ascending sort order, one starts each time with the smallest value; for a descending one, with the largest.

This yields the complexity O(n) (storing, testing the successor, masking the bits, reading). If an additional value is to be sorted in, an additional complexity of O(log n) arises (binary search in a sorted list). If d data lines are used, the complexity of sorting decreases by a factor of 1/d. If the values are g times as long as a memory location can hold, the complexity increases by a factor of g.

If the memory is not organised as a tree of degree k, the complexity increases by a factor of log_k n, because then every intermediate node on the way to a value must be read. The more of a degree-k tree one can find in the memory organisation, the lower the complexity. If the first k^p memory locations are to be used for other purposes, all intermediate nodes of the p-th level must be searched. The complexity thereby increases only marginally if the memory is well filled, but otherwise correspondingly more, with a worst-case factor of O(k^p).

Currently viable non-parallel sorting algorithms based on swapping the n input values have a complexity of O(n log n). This bound can be significantly undercut, as described below, by a procedure in the spirit of radix sort.


In the following, the optimal sorting algorithm bitsort is presented. It sorts stably, with a time complexity of O(n) in every case, and needs O(n) additional storage space. It is assumed here that comparing two values has time complexity O(1), that the values are present as bit sequences, and that all bits of a value can likewise be processed in O(1). It is furthermore presupposed that the storage is realised as a binary tree and that the path from root to leaf can also be traversed in O(1). If p is the number of parallel steps for parallel bitsort, the time complexity is O(n/p).

Once a value has been processed, the values inserted so far are available in sorted order. Inserting an additional value has time complexity O(1). After each insertion, the m values can be output with time complexity O(m). The storage must be equipped with a certain intelligence of its own: each storage node must be able to store the value that reaches it on its own and, after storing it, to pass it on to the next storage node if the node has already been visited, or to return it to the central unit if it has not yet been visited. The next storage node results each time from the corresponding bit of the value within the storage hierarchy.

The bits of a value are processed, beginning with the leading bit, until either a difference to the previous value or equality is certain. Every value is inserted as a leaf into a binary tree. In the case of equality, a leaf stores either the number of values for this leaf, the address of a list containing the addresses for this value, or, if the bits of the value have not yet been processed completely, the value itself or the remaining bits. The bit chain is always broken up at the position where a leaf is created. Long runs of identical bits within a value can be stored by their count; this is advisable, for example, for floating-point numbers in exponential representation. The lists can be combined into a single list if every entry refers to its successor (predecessor) of the same value, or to null if there is none. The inner nodes store the value that passed them last, or the common bit chain of the two values below the node from which these branched.
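As an illustration, here is a minimal sequential sketch of the bitsort trie in Python for fixed-width unsigned integers; the class and method names are invented for the example, and a leaf simply counts duplicates instead of keeping an address list. On conventional hardware each insertion costs O(width) rather than the O(1) achievable with the intelligent storage described above.

    # Minimal bitsort sketch: a binary trie keyed on the bits of each value,
    # leading bit first; equal values share a leaf that counts them.
    class BitTrie:
        def __init__(self, width):
            self.width = width                   # bits per value
            self.root = {}
        def insert(self, value):
            node = self.root
            for i in range(self.width - 1, -1, -1):
                node = node.setdefault((value >> i) & 1, {})
            node['count'] = node.get('count', 0) + 1
        def sorted_values(self):
            out = []
            def walk(node, prefix, depth):
                if depth == self.width:
                    out.extend([prefix] * node['count'])
                    return
                for bit in (0, 1):               # 0 before 1 gives ascending order
                    if bit in node:
                        walk(node[bit], (prefix << 1) | bit, depth + 1)
            walk(self.root, 0, 0)
            return out

    trie = BitTrie(width=8)
    for v in [23, 5, 23, 200, 1]:
        trie.insert(v)
    print(trie.sorted_values())                  # [1, 5, 23, 23, 200]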


In all sorting procedures in which values of bit length l = q·a are sorted, with address length a, the time complexity per value increases from O(1) to O(q). If O(r) is the average of O(q), then at least one O(n) in the time complexity of the sorting procedure is to be replaced by O(r n). If s is the average number of distinct (!) values that end up in a bucket after sorting up to bit length a and whose bit length is greater than a, then the time complexity of the sorting procedure is O((log s + r) n). These values of a bucket are stored, per bucket, in an AVL tree (or in a B-tree if they reside in external memory that is slow relative to main memory). At any time, one works only with the significant partial value, of address length, of a value.
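A hedged sketch of this bucket scheme, assuming an address length of 8 bits: the leading partial value selects a bucket, and the remaining bits of the distinct values are kept per bucket in a sorted structure. Python has no built-in AVL tree, so a sorted list searched with bisect stands in for it; the search is O(log s), although list insertion is O(s), which a real AVL tree would avoid.

    # Buckets keyed by the leading A bits; per bucket, the distinct bit rests
    # are held in a sorted list (stand-in for the AVL resp. B-tree).
    import bisect
    from collections import defaultdict

    A = 8                                        # assumed address length in bits

    def insert(buckets, value, width):
        head = value >> (width - A)              # significant partial value
        rest = value & ((1 << (width - A)) - 1)  # remaining bits
        lst = buckets[head]
        pos = bisect.bisect_left(lst, rest)      # O(log s) among distinct rests
        if pos == len(lst) or lst[pos] != rest:
            lst.insert(pos, rest)                # O(s) here; O(log s) in an AVL tree

    buckets = defaultdict(list)
    for v in [0xAB12, 0xAB03, 0x1100, 0xAB12]:
        insert(buckets, v, width=16)
    print({hex(k): [hex(x) for x in v] for k, v in sorted(buckets.items())})
    # {'0x11': ['0x0'], '0xab': ['0x3', '0x12']}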

If one forgoes having the values completely sorted after each value is processed, the time complexity O(r n) can be maintained for the complete sorting by using two areas of main memory. The values are first sorted into buckets up to address length and then sorted further, address length by address length (i.e. partial value by partial value), bucket by bucket. Processing all values whose previously processed partial value is equal counts as one run. It is presupposed here that reading and writing the partial values of a value and its address amount, over all runs, to O(r) on average in total.

The second area of main memory is cleared after each run so as to be available for the next run, until the bucket is processed. Only the addresses, in the current sort sequence, are written back into the first area of main memory before the values with the just-processed equal partial value are sorted further. Each value should be stored contiguously in memory so that any partial value can be accessed quickly. The time complexity can admittedly be argued about. The values of a bucket and all the addresses should fit into main memory to avoid relatively long loading times.
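The two-area procedure can be approximated by an MSD radix sort that processes one partial value of address length per run, as in the following sketch; the chunk size and function names are assumptions for the example. The bucket array plays the role of the second area, cleared for each run, while the write-back order corresponds to the first area.

    # MSD radix sort on fixed-width keys, one chunk ("partial value") per run.
    def msd_radix_sort(values, width=32, chunk=8):
        def sort_range(vals, shift):
            if len(vals) <= 1 or shift < 0:
                return vals
            buckets = [[] for _ in range(1 << chunk)]     # second area, per run
            for v in vals:
                buckets[(v >> shift) & ((1 << chunk) - 1)].append(v)
            out = []                                      # write-back order (first area)
            for b in buckets:
                out.extend(sort_range(b, shift - chunk))
            return out
        return sort_range(list(values), width - chunk)

    print(msd_radix_sort([3005, 17, 4242, 17, 99]))       # [17, 17, 99, 3005, 4242]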

Both procedures can be combined well: first the O(r n) procedure for master data, then the O((log s + r) n) procedure for transaction data. Sorted data can be stored in O(n) in main memory and in an AVL resp. B-tree. In particular, indices for non-indexed columns can thus be built quickly in database queries. For large tables, it makes a significant difference whether an index is built in O(r n) instead of O(r n log n), and whether a value can then be searched for on average in O(log s + r) instead of O(log n + r) or worse. With skilful use of a sufficiently large main memory, several indices can easily be kept simultaneously. For numbers, r is usually quite small, depending on the accuracy, and so is s for a corresponding distribution. The same holds for (compressed) words if the memory size corresponds to that of commercially available computers today. Sorting by merging arbitrarily many sorted lists is also possible in O(r n).
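As a short usage illustration of the final remark, merging arbitrarily many sorted lists in plain Python can be done with heapq.merge; note that this gives the standard O(n log k) k-way merge, whereas the O(r n) bound claimed above would again rely on the trie-based storage.

    # Standard k-way merge of sorted lists (O(n log k) with a heap).
    import heapq

    lists = [[1, 4, 9], [2, 2, 7], [0, 5]]
    print(list(heapq.merge(*lists)))   # [0, 1, 2, 2, 4, 5, 7, 9]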


Discussion of bitsort: it is well suited for parallel programming and is stable. It is optimal under the presuppositions made. Bitsort does only what is absolutely required; it is optimal for the rapid ongoing building of indices in (relatively fast-growing) databases and for searching (see below). Search trees need not be balanced. For values with relatively few bits (at most address length) and corresponding hardware, it beats all sorting procedures known to me if the values have to be read successively. One should consider carefully whether an adaptation of the hardware pays off to achieve the described time complexity. The implementation of an appropriate memory with relatively low intelligence is admittedly simple and saves more complex processors; moreover, by using sufficiently many of these, or parallel processing (e.g. the parallel use of memory areas), the term log s + r in O((log s + r) n) can be made less than 1 if one uses only AVL resp. B-trees and merges the result lists afterwards in O(n).


To support sorting and searching in heterogeneous (distributed) databases (for instance, ones with different database models), a database query language can be used that allows unified data manipulation with the aid of data structures generated dynamically from these databases (e.g. under a uniform interface), as long as the records have a somehow similar structure. It can decide dynamically, based on the corresponding demands (possibly anticipatorily), which manipulation steps are to be performed. It need only be provided with the corresponding memory information of the databases when the database structures are set up.

Changes to the program code and data (such as the integration of further databases) at run time can be provided for if sufficient security mechanisms (including authorisation checks and the setting of locks) are available. The indices may be built, for example, in parallel or at times of low load, depending on their size, and with different data structures, adapted to the individual problem or optimised for it. Possibly a temporary merge or parallel access is sufficient. High-calibre standards can considerably facilitate the implementation of such a database language.


Outlook on graph theory: let n be the number of nodes of a finite simple connected graph. (If it is not simple, keep, by sorting out in O(n), only the lightest of any parallel edges and no loops. If it is not connected, remove all isolated nodes in O(n).) Since sorting as well as parallel adding and comparing are possible in O(1), Dijkstra's algorithm yields a shortest path in the graph in at most O(n) each time, provided negative weights are excluded. How can a minimum and a maximum spanning tree be computed in it in O(n log n), or even in O(log n)?
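Before turning to that question: for reference, Dijkstra's algorithm with a binary heap runs sequentially in O((n + m) log n) for m edges; the O(n) bound above presupposes the parallel hardware. The graph encoding and names below are assumptions for the example.

    # Dijkstra with a binary heap; graph: {node: [(neighbour, weight), ...]}.
    import heapq

    def dijkstra(graph, source):
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue                         # stale heap entry
            for v, w in graph.get(u, []):
                nd = d + w                       # negative weights excluded
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    g = {0: [(1, 2), (2, 5)], 1: [(2, 1)], 2: []}
    print(dijkstra(g, 0))                        # {0: 0, 1: 2, 2: 3}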

Let each node correspond to a different natural number. Then the edges emanating from a node can each be sorted in parallel in at most O(n), or O(1). Let all edges be coloured white at the beginning. Each node appends, bit by bit, first the smaller and then the greater number of the two nodes of its lightest white-coloured edge to that edge's weight as a value, and stores the smallest value that reaches it. This is possible in at most O(n), or O(1). Then each node has a nearest adjacent node, and the edges whose two nodes belong to the same (partial) minimum spanning tree are coloured black. This is likewise possible in at most O(n), or O(1).

Now the process continues with the white-coloured edges in at most O(log n) steps, where the smallest value of a (partial) minimum spanning tree can be determined in at most O(n), or O(1), until an overall minimum spanning tree has been created in at most O(n log n), or O(log n). An overall maximum spanning tree can be calculated by subtracting each edge weight from a sufficiently large constant and taking the results as new edge weights.
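The rounds just described parallel Borůvka's algorithm: every component selects its lightest white edge, the selected edges are coloured black, and the components merge, over O(log n) rounds. The following is a sequential sketch with union-find; the edge format and names are choices for the example. A maximum spanning tree follows, as stated, by replacing each weight w with C - w for a sufficiently large constant C.

    # Boruvka-style MST: per round, each component picks its cheapest
    # outgoing edge; union-find tracks the components. Edges: (weight, u, v).
    def boruvka_mst(n, edges):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]    # path halving
                x = parent[x]
            return x
        mst, components = [], n
        while components > 1:
            cheapest = [None] * n
            for w, u, v in edges:
                ru, rv = find(u), find(v)
                if ru != rv:                     # white edge between components
                    for r in (ru, rv):
                        if cheapest[r] is None or w < cheapest[r][0]:
                            cheapest[r] = (w, u, v)
            if all(c is None for c in cheapest):
                break                            # graph not connected
            for c in cheapest:
                if c is not None:
                    w, u, v = c
                    ru, rv = find(u), find(v)
                    if ru != rv:                 # colour the edge black
                        parent[ru] = rv
                        mst.append(c)
                        components -= 1
        return mst

    edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]
    print(sorted(boruvka_mst(4, edges)))         # [(1, 1, 2), (2, 2, 3), (3, 0, 2)]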

By calculating shortest paths, a traffic control system, for example, can be simulated in the computer, responding to changing conditions by modifying the edge weights. Packet collisions can be prevented if the nodes are numbered consecutively and, each time, the packet that passes through the preceding node with the smallest number is given the first chance. Collisions can hereby be minimised by assigning road users interim targets on less-loaded routes.


