## Mergesort

The sorting algorithm Mergesort produces a sorted sequence by recursively sorting the two halves of the input and merging them. With a time complexity of `O`(`n` log(`n`)), Mergesort is optimal.

Similar to Quicksort, the Mergesort algorithm is based on a divide-and-conquer strategy. First, the sequence to be sorted is decomposed into two halves (*Divide*). Each half is sorted independently (*Conquer*). Then the two sorted halves are merged into one sorted sequence (*Combine*) (Figure 1).


Figure 1: Mergesort(n)

The following procedure `mergesort` sorts a sequence `a` from index `lo` to index `hi`.

```java
void mergesort(int lo, int hi) {
    if (lo < hi) {
        int m = lo + (hi - lo) / 2;
        mergesort(lo, m);
        mergesort(m + 1, hi);
        merge(lo, m, hi);
    }
}
```

First, the index `m` in the middle between `lo` and `hi` is determined^{1}). Then the first part of the sequence (from `lo` to `m`) and the second part (from `m`+1 to `hi`) are sorted by recursive calls of *mergesort*. Then the two sorted halves are merged by procedure *merge*. The recursion ends when `lo` = `hi`, i.e. when a subsequence consists of only one element.

The main work of the Mergesort algorithm is performed by function *merge*. There are different possibilities to implement this function.

Function *merge* is usually implemented in the following way: the two halves are first copied into an auxiliary array `b`. Then the two halves are scanned by pointers `i` and `j`, and at each step the smaller of the two current elements is copied back to array `a` (Figure 2).


Figure 2: Merging two sorted halves

At the end a situation occurs where one index has reached the end of its half, while the other has not. Then, in principle, the rest of the elements of the corresponding half have to be copied back. Actually, this is not necessary for the second half, since (copies of) the remaining elements are already at their proper places.

The following program is an implementation of this approach.

```java
// Straightforward variant a
void merge(int lo, int m, int hi) {
    int i, j, k;

    // copy both halves of a to auxiliary array b
    for (i = lo; i <= hi; i++)
        b[i] = a[i];

    i = lo; j = m + 1; k = lo;
    // copy back the smaller element at each step
    while (i <= m && j <= hi)
        if (b[i] <= b[j])
            a[k++] = b[i++];
        else
            a[k++] = b[j++];

    // copy back remaining elements of first half (if any)
    while (i <= m)
        a[k++] = b[i++];
}
```

In Java the short form `k++` is equivalent to `k = k+1`; the statement `a[k++] = b[i++]` is equivalent to the sequence `a[k] = b[i]; k = k+1; i = i+1;`.
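As a minimal sketch of this shorthand (class name and values are illustrative only):

```java
public class PostIncrementDemo {
    public static void main(String[] args) {
        int[] a = new int[2];
        int[] b = {5, 7};
        int k = 0, i = 1;

        // a[k++] = b[i++] copies b[1] to a[0], then increments both indices
        a[k++] = b[i++];

        System.out.println(a[0] + " " + k + " " + i);  // prints "7 1 2"
    }
}
```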

Actually, it is not necessary to copy the second half of array `a` to the auxiliary array `b`. It can remain where it is in array `a` (Figure 3). In this way, only half as much auxiliary memory space is required, and also only half as much time to copy array elements to `b`. Furthermore, once all elements of the first half have been copied back to `a`, the remaining elements of the second half need not be moved anymore, since they are already at their proper places [Som 04].


Figure 3: Merging b with second half of a

Observe that when index `k` reaches index `j` all elements of array `b` have been copied back.

The implementation of this approach follows.

```java
// Efficient variant b
void merge(int lo, int m, int hi) {
    int i, j, k;

    i = 0; j = lo;
    // copy first half of array a to auxiliary array b
    while (j <= m)
        b[i++] = a[j++];

    i = 0; k = lo;
    // copy back the smaller element at each step
    while (k < j && j <= hi)
        if (b[i] <= a[j])
            a[k++] = b[i++];
        else
            a[k++] = a[j++];

    // copy back remaining elements of first half (if any)
    while (k < j)
        a[k++] = b[i++];
}
```

In this approach, the first half of `a` is copied to auxiliary array `b` in its normal order, but the second half is copied to `b` in opposite order [Sed 88]. The result is a sequence that first monotonically increases and then monotonically decreases – a so-called *bitonic* sequence. Now this sequence is scanned by pointers `i` and `j` from both ends. Again, the respective next-greatest element at each time is copied back to array `a`. Copying is completed when `i` and `j` cross, i.e. when `i` > `j` (Figure 4). Observe that it is not necessary that `i` and `j` stay in "their" halves.


Figure 4: Merging of two halves sorted in opposite order

The implementation of the bitonic approach follows.

```java
// Bitonic variant c
void merge(int lo, int m, int hi) {
    int i = lo, j = hi, k = lo;

    // copy first half of array a to auxiliary array b
    while (i <= m)
        b[k++] = a[i++];

    // copy second half of array a to auxiliary array b,
    // but in opposite order
    while (j > m)
        b[k++] = a[j--];

    i = lo; j = hi; k = lo;
    // copy back the smaller element at each step,
    // until i and j cross
    while (i <= j)
        if (b[i] <= b[j])
            a[k++] = b[i++];
        else
            a[k++] = b[j--];
}
```

This variant of function *merge* always performs exactly the same number of steps, regardless of whether the input sequence is random, sorted, or sorted in reverse order.

In contrast to the other variants, however, it is not stable, i.e. the original order of equal elements may be changed.
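This instability can be made visible by attaching labels to the keys. The following sketch (class name and labeling scheme are my own, not from the original) runs the bitonic variant on keys paired with labels; the two equal keys `2b` and `2c` come out in reversed relative order.

```java
public class BitonicStabilityDemo {
    static int[] a;       // keys
    static String[] tag;  // labels riding along with the keys
    static int[] b;
    static String[] tb;

    public static void sort(int[] keys, String[] labels) {
        a = keys; tag = labels;
        b = new int[a.length]; tb = new String[a.length];
        mergesort(0, a.length - 1);
    }

    static void mergesort(int lo, int hi) {
        if (lo < hi) {
            int m = lo + (hi - lo) / 2;
            mergesort(lo, m);
            mergesort(m + 1, hi);
            merge(lo, m, hi);
        }
    }

    // bitonic variant c, extended to move the labels along with the keys
    static void merge(int lo, int m, int hi) {
        int i = lo, j = hi, k = lo;
        while (i <= m) { tb[k] = tag[i]; b[k++] = a[i++]; }
        while (j > m)  { tb[k] = tag[j]; b[k++] = a[j--]; }
        i = lo; j = hi; k = lo;
        while (i <= j)
            if (b[i] <= b[j]) { tag[k] = tb[i]; a[k++] = b[i++]; }
            else              { tag[k] = tb[j]; a[k++] = b[j--]; }
    }

    public static void main(String[] args) {
        int[] keys = {2, 1, 2, 2};
        String[] labels = {"2a", "1", "2b", "2c"};
        sort(keys, labels);
        // keys are sorted, but the equal keys 2b and 2c have swapped:
        System.out.println(String.join(" ", labels));  // prints "1 2a 2c 2b"
    }
}
```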

The following class *MergeSorter* encapsulates the functions *mergesort* and *merge*.

The method *sort* passes the array to be sorted to array `a`, allocates the auxiliary array `b` and calls *mergesort*.

The command for sorting an array `c` with mergesort is

```java
MergeSorter.sort(c);
```

The size of the auxiliary array `b` has to be chosen appropriately according to the implementation of function *merge*, namely `n` for variants a and c and (`n`+1)/2 for variant b.

```java
public class MergeSorter {
    private static int[] a, b;  // auxiliary array b

    public static void sort(int[] a0) {
        a = a0;
        int n = a.length;
        // according to variant, either
        b = new int[n];           // variants a and c
        // or
        // b = new int[(n+1)/2];  // variant b
        mergesort(0, n - 1);
    }

    private static void mergesort(int lo, int hi) {
        if (lo < hi) {
            int m = lo + (hi - lo) / 2;
            mergesort(lo, m);
            mergesort(m + 1, hi);
            merge(lo, m, hi);
        }
    }

    private static void merge(int lo, int m, int hi) {
        // insert implementation here
    }

}  // end class MergeSorter
```
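For reference, a complete runnable sketch of the class is shown below, using the straightforward variant a for *merge*; the `main` method and its sample array are illustrative additions, not part of the original class.

```java
import java.util.Arrays;

public class MergeSorter {
    private static int[] a, b;  // auxiliary array b

    public static void sort(int[] a0) {
        a = a0;
        int n = a.length;
        b = new int[n];  // variant a needs n auxiliary elements
        mergesort(0, n - 1);
    }

    private static void mergesort(int lo, int hi) {
        if (lo < hi) {
            int m = lo + (hi - lo) / 2;
            mergesort(lo, m);
            mergesort(m + 1, hi);
            merge(lo, m, hi);
        }
    }

    // straightforward variant a
    private static void merge(int lo, int m, int hi) {
        int i, j, k;
        for (i = lo; i <= hi; i++)   // copy both halves to b
            b[i] = a[i];
        i = lo; j = m + 1; k = lo;
        while (i <= m && j <= hi)    // copy back the smaller element
            if (b[i] <= b[j]) a[k++] = b[i++];
            else              a[k++] = b[j++];
        while (i <= m)               // rest of first half
            a[k++] = b[i++];
    }

    public static void main(String[] args) {
        int[] c = {5, 2, 8, 1, 9, 3};
        MergeSorter.sort(c);
        System.out.println(Arrays.toString(c));  // prints "[1, 2, 3, 5, 8, 9]"
    }
}
```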

The straightforward variant of function *merge* requires at most 2`n` steps (`n` steps for copying the sequence to the intermediate array `b`, and at most `n` steps for copying it back to array `a`). The time complexity of *mergesort* is therefore

`T`(`n`) ≤ 2`n` + 2 `T`(`n`/2)  and

`T`(1) = 0

The solution of this recursion (unrolling it gives log(`n`) levels, each contributing 2`n` steps) yields

`T`(`n`) ≤ 2`n` log(`n`) ∈ `O`(`n` log(`n`))

Thus, the mergesort algorithm is optimal, since the lower bound for the sorting problem of Ω(`n` log(`n`)) is attained.

In the more efficient variant, function *merge* requires at most 1.5`n` steps (`n`/2 steps for copying the first half of the sequence to the intermediate array `b`, `n`/2 steps for copying it back to array `a`, and at most `n`/2 steps for processing the second half). This yields a running time of *mergesort* of at most 1.5`n` log(`n`) steps.
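This 1.5`n` log(`n`) bound can be checked empirically. The following instrumented sketch (class name and counting scheme are my own) counts every element copy performed by variant b during a full sort of 64 reverse-sorted elements; the total stays below 1.5 · 64 · log₂(64) = 576.

```java
public class MergeStepCounter {
    static int[] a, b;
    static int steps;  // counts every element copy performed by merge

    public static void main(String[] args) {
        int n = 64;
        a = new int[n];
        for (int i = 0; i < n; i++) a[i] = n - i;  // reverse-sorted input
        b = new int[(n + 1) / 2];                  // variant b needs only half
        steps = 0;
        mergesort(0, n - 1);
        System.out.println("copies: " + steps + " (bound: 576)");
    }

    static void mergesort(int lo, int hi) {
        if (lo < hi) {
            int m = lo + (hi - lo) / 2;
            mergesort(lo, m);
            mergesort(m + 1, hi);
            merge(lo, m, hi);
        }
    }

    // efficient variant b, instrumented with a step counter
    static void merge(int lo, int m, int hi) {
        int i = 0, j = lo, k;
        while (j <= m) { b[i++] = a[j++]; steps++; }   // first half to b
        i = 0; k = lo;
        while (k < j && j <= hi) {                     // merge back
            if (b[i] <= a[j]) a[k++] = b[i++];
            else              a[k++] = a[j++];
            steps++;
        }
        while (k < j) { a[k++] = b[i++]; steps++; }    // rest of first half
    }
}
```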

Algorithm mergesort has a time complexity of Θ(`n` log(`n`)), which is optimal. A drawback of mergesort is that it needs additional space of Θ(`n`) for the temporary array `b`.

There are different possibilities to implement function *merge*. The most efficient of these is variant b. It requires only half as much additional space, it is faster than the other variants, and it is stable.

[Knu 73] D.E. Knuth: The Art of Computer Programming, Vol. 3 - Sorting and Searching. Addison-Wesley (1973)

[Sed 88] R. Sedgewick: Algorithms. 2nd edition, Addison-Wesley (1988)

[Som 04] P. Sommerlad: Personal correspondence. Hochschule für Technik Rapperswil, Switzerland (2004)

^{1}) The computation `lo+(hi-lo)/2` instead of `(lo+hi)/2` is safer, since `lo+hi` may cause an integer overflow when it exceeds 2,147,483,647 (the maximum value of a Java `int`).


H.W. Lang, Hochschule Flensburg, lang@hs-flensburg.de. Created: 19.01.2000, Updated: 29.05.2016