In 1960, C. A. R. Hoare developed the quicksort method, which became the most famous of the sorting algorithms. Today it is widely used in programming because of its many strengths: it works in the general case, requires only a small amount of additional memory, can sort lists of different element types, and is convenient to implement. But quicksort also has disadvantages: implementations are prone to subtle errors, and the algorithm is not stable.
It is, however, the most thoroughly studied sorting algorithm. After Hoare's first analyses appeared, many researchers began to study it closely. A large body of theory on its running time was built up and supported by empirical data, and practical improvements to the basic algorithm were proposed that increase its speed.
Quicksort is very common and can be found everywhere. It underlies the TList.Sort method, which exists in all versions of Delphi except version 1, and the qsort function of the C/C++ runtime library.
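As a minimal illustration of the qsort interface just mentioned (the element type and the comparison function here are our own choices, not prescribed by the text): the caller passes the array, the element count and size, and a comparator returning negative, zero, or positive.

```cpp
#include <cstdlib>

// Three-way comparison for int elements, written in the form qsort expects:
// negative if a < b, zero if equal, positive if a > b.
int cmp_int(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);   // avoids overflow that plain x - y could cause
}

// Usage: std::qsort(data, count, sizeof(int), cmp_int);
```

The `(x > y) - (x < y)` idiom is preferred over `x - y`, which can overflow for large values of opposite sign.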
The basic principle can be summed up as "divide and conquer." The list is split into two groups, and each part is then sorted on its own. The crucial step is therefore the partition, during which the following happens: a pivot element is chosen, and the entire list is rearranged around it. To the left go the elements whose values are smaller than the pivot; all the others are moved to the right. As a result, the pivot ends up in its rightful place in the sorted list. The next step is to call the sort function recursively for the elements on each side of the pivot. The process ends only when a sublist contains a single element, that is, when it is already sorted. Thus, to master quicksort one needs to understand two lower-level algorithms: a) selection of the pivot element; b) the most efficient permutation of the list into two sets of smaller and larger values.
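The recursive structure described above can be sketched as follows. This is a minimal sketch, not the article's own implementation: the pivot choice (last element) and the single left-to-right scan are deliberately simple placeholders for the lower-level algorithms discussed next.

```cpp
#include <vector>
#include <utility>

// Divide and conquer: partition around a pivot, then sort each side.
void quicksort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;             // a sublist of 0 or 1 elements is sorted
    int pivot = a[hi];                // simplistic pivot choice for this sketch
    int i = lo;
    for (int j = lo; j < hi; ++j)     // move smaller values into the left group
        if (a[j] < pivot) std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi]);           // pivot lands in its final sorted position
    quicksort(a, lo, i - 1);          // recurse on the smaller-value group
    quicksort(a, i + 1, hi);          // recurse on the larger-value group
}
```

Note how the recursion bottoms out exactly as the text says: once a sublist has at most one element, there is nothing left to do.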
Consider pivot selection first. Ideally, the median of the list should be chosen as the pivot: the partition would then split the list into two equal halves. But computing the exact median is expensive, so even the fastest implementations avoid that calculation. Choosing the maximum or the minimum as the pivot is the worst option: one of the resulting sublists is guaranteed to be empty, and the other holds everything else. The conclusion is that the pivot should be an element whose value is close to the median and far from the extremes.
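A common compromise, assumed here as an illustration rather than taken from the text, is the median-of-three heuristic: inspect the first, middle, and last elements and use the median of those three as the pivot. This cheaply avoids the degenerate minimum/maximum pivot on already-sorted input.

```cpp
#include <vector>

// Return the index (lo, mid, or hi) of the element whose value is the
// median of the three candidates a[lo], a[mid], a[hi].
int median_of_three(const std::vector<int>& a, int lo, int hi) {
    int mid = lo + (hi - lo) / 2;   // overflow-safe midpoint
    if ((a[lo] <= a[mid] && a[mid] <= a[hi]) ||
        (a[hi] <= a[mid] && a[mid] <= a[lo]))
        return mid;                 // middle element is the median
    if ((a[mid] <= a[lo] && a[lo] <= a[hi]) ||
        (a[hi] <= a[lo] && a[lo] <= a[mid]))
        return lo;                  // first element is the median
    return hi;                      // otherwise the last element is
}
```

On a sorted list the middle element wins, which is exactly the ideal split the text describes.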
Once the pivot is chosen, you can proceed to the partitioning algorithm, the so-called inner loops of quicksort. Everything is built around two scanning indexes: the first walks through the elements from left to right, the second, on the contrary, from right to left. The right-hand scan runs as follows: the index moves through the list, comparing each value with the pivot, and its pass is complete when it finds an element less than or equal to the pivot; at each comparison the index is decremented. The left-hand scan finishes when it finds a value greater than or equal to the pivot; here the index is incremented at each comparison.
At this stage of the partitioning algorithm, two situations can arise. The first is that the left index is still less than the right one. This means the two elements the indexes point to are on the wrong sides of the pivot, and the way out is to swap them. The second situation is that the indexes are equal or have crossed. This signals that the list has been successfully partitioned, and the pass can be considered finished.
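The two-index scheme described above can be sketched as a Hoare-style partition. This is a sketch under one assumption not fixed by the text: the pivot is taken as the first element purely for illustration.

```cpp
#include <vector>
#include <utility>

// Two-index partition: returns the index where the scans met.
// Afterwards a[lo..j] holds values <= pivot and a[j+1..hi] values >= pivot.
int partition(std::vector<int>& a, int lo, int hi) {
    int pivot = a[lo];                      // illustrative pivot choice
    int i = lo - 1;                         // left-to-right index
    int j = hi + 1;                         // right-to-left index
    while (true) {
        do { ++i; } while (a[i] < pivot);   // left scan: stop on value >= pivot
        do { --j; } while (a[j] > pivot);   // right scan: stop on value <= pivot
        if (i >= j) return j;               // indexes met or crossed: done
        std::swap(a[i], a[j]);              // wrong order: exchange the pair
    }
}
```

The `if (i >= j)` test is exactly the second situation from the text; the swap handles the first.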