University of Phoenix – DAT/305 – Individual: Sorting, Section 8.11

Resource: Ch. 8, “Sorting,” of Data Structures: Abstraction and Design Using Java.

Complete the following Review Questions within the “Exercises for Sections 8.11” subsection in Section 8.11, “The Dutch National Flag,” of Ch. 8, “Sorting,” in Data Structures: Abstraction and Design Using Java:

Review Question #1
Review Question #3
Review Question #4

Submit your pseudocode response to the Assignment Files tab.
Chapter 8  Sorting

Chapter Objectives
To learn how to use the standard sorting methods in the Java API
To learn how to implement the following sorting algorithms: selection sort, insertion sort, Shell sort, merge sort, Timsort, heapsort, and quicksort
To understand the difference in performance of these algorithms, and which to use for small arrays, which to use for medium arrays, and which to use for large arrays

Sorting is the process of rearranging the data in an array or a list so that it is in increasing (or decreasing) order. Because sorting is done so frequently, computer scientists have devoted much time and effort to developing efficient algorithms for sorting arrays. Even though many languages (including Java) provide sorting utilities, it is still very important to study these algorithms because they illustrate several well-known ways to solve the sorting problem, each with its own merits. You should know how they are written so that you can duplicate them if you need to use them with languages that don’t have sorting utilities.

Another reason for studying these algorithms is that they illustrate some very creative approaches to problem solving. For example, the insertion sort algorithm adapts an approach used by card players to arrange a hand of cards; the merge sort algorithm builds on a technique used to sort external data files. Several algorithms use divide-and-conquer to break a larger problem into more manageable subproblems. The Shell sort is a very efficient sort that works by sorting many small subarrays using insertion sort, which is a relatively inefficient sort when used by itself. The merge sort and quicksort algorithms are both recursive. Method heapsort uses a heap as its underlying data structure. The final reason for studying sorting is to learn how computer scientists analyze and compare the performance of several different algorithms that perform the same operation.
We will cover two quadratic (O(n²)) sorting algorithms that are fairly simple and appropriate for sorting small arrays but are not recommended for large arrays. We will also discuss four sorting algorithms that give improved performance (O(n log n)) on large arrays and one that gives performance that is much better than O(n²) but not as good as O(n log n). Our goal is to provide a sufficient selection of quadratic sorts and faster sorts. A few other sorting algorithms are described in the programming projects. Our expectation is that your instructor will select which algorithms you should study.

Sorting
8.1 Using Java Sorting Methods
8.2 Selection Sort
8.3 Insertion Sort
8.4 Comparison of Quadratic Sorts
8.5 Shell Sort: A Better Insertion Sort
8.6 Merge Sort
8.7 Timsort
8.8 Heapsort
8.9 Quicksort
8.10 Testing the Sort Algorithms
8.11 The Dutch National Flag Problem (Optional Topic)
Case Study: The Problem of the Dutch National Flag

8.1 Using Java Sorting Methods

The Java API java.util provides a class Arrays with several overloaded sort methods for different array types. In addition, the class Collections (also part of the API java.util) contains similar sorting methods for Lists. The methods for arrays of primitive types are based on the quicksort algorithm (Section 8.9), and the methods for arrays of Objects and for Lists are based on the Timsort algorithm (Section 8.7). Both algorithms are O(n log n).

Method Arrays.sort is defined as a public static void method and is overloaded (see Table 8.1). The first argument in a call can be an array of any primitive type (although we have just shown int[]) or an array of objects. If the first argument is an array of objects, then either the class type of the array must implement the Comparable interface or a Comparator object must be passed as the last argument (see Section 6.6). A class that implements the Comparable interface must define a compareTo method that determines the natural ordering of its objects.
If a Comparator is passed, its compare method will be used to determine the ordering.

TABLE 8.1 Methods sort in Classes java.util.Arrays and java.util.Collections

Method sort in Class Arrays and its Behavior:

public static void sort(int[] items)
    Sorts the array items in ascending order

public static void sort(int[] items, int fromIndex, int toIndex)
    Sorts array elements items[fromIndex] to items[toIndex - 1] in ascending order (toIndex is exclusive)

public static void sort(Object[] items)
    Sorts the objects in array items in ascending order using their natural ordering (defined by method compareTo). All objects in items must implement the Comparable interface and must be mutually comparable

public static void sort(Object[] items, int fromIndex, int toIndex)
    Sorts array elements items[fromIndex] to items[toIndex - 1] in ascending order using their natural ordering (defined by method compareTo). All objects must implement the Comparable interface and must be mutually comparable

public static <T> void sort(T[] items, Comparator<? super T> comp)
    Sorts the objects in items in ascending order as defined by method comp.compare. All objects in items must be mutually comparable using method comp.compare

public static <T> void sort(T[] items, int fromIndex, int toIndex, Comparator<? super T> comp)
    Sorts the objects in items[fromIndex] to items[toIndex - 1] in ascending order as defined by method comp.compare. All objects in items must be mutually comparable using method comp.compare

Method sort in Class Collections and its Behavior:

public static <T extends Comparable<? super T>> void sort(List<T> list)
    Sorts the objects in list in ascending order using their natural ordering (defined by method compareTo). All objects in list must implement the Comparable interface and must be mutually comparable

public static <T> void sort(List<T> list, Comparator<? super T> comp)
    Sorts the objects in list in ascending order as defined by method comp.compare.
All objects must be mutually comparable

Method sort in Interface List and its Behavior:

default void sort(Comparator<? super E> comp)
    Sorts the objects in the list in ascending order as defined by method comp.compare. All objects must be mutually comparable

For method Collections.sort (see Table 8.1), the first argument must be a collection of objects that implement the List interface (e.g., an ArrayList or a LinkedList). If only one argument is provided, the objects in the List must implement the Comparable interface. Method compareTo is called by the sorting method to determine the relative ordering of two objects. Optionally, a Comparator can be passed as a second argument. Using a Comparator, you can compare objects based on some other information rather than using their natural ordering (as determined by method compareTo). The Comparator object must be the last argument in the call to the sorting method. Rather than rearranging the elements in the List, method sort first copies the List elements to an array, sorts the array using Arrays.sort, and then copies them back to the List. In Java 8, the List interface now contains a sort method. This method passes the list (this) to Collections.sort.

In class Arrays, the two methods that use a Comparator are generic methods. Generic methods, like generic classes, have generic type parameters. The generic parameter(s) precede the method return type. For example, in the declaration

public static <T> void sort(T[] items, Comparator<? super T> comp)

T represents the generic parameter for the sort method and appears in the method parameter list (e.g., T[] items). For the second method parameter, the notation Comparator<? super T> means that comp must be an object that implements the Comparator interface for type T or for a superclass of type T. For example, you can define a class that implements Comparator<Number> and use it to sort an array of Integer objects or an array of Double objects.
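The overloads in Table 8.1 can be exercised with a short, self-contained sketch. The class name SortDemo and its helper methods are ours, not part of the text; the API calls are the standard java.util ones.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

/** Small demo of the sort methods in Table 8.1 (class name is ours). */
public class SortDemo {

    /** Natural ordering: String implements Comparable<String>. */
    public static String[] naturalOrder(String[] items) {
        String[] copy = items.clone();
        Arrays.sort(copy);                  // uses compareTo
        return copy;
    }

    /** Comparator ordering: ignore case instead of the natural ordering. */
    public static String[] ignoreCaseOrder(String[] items) {
        String[] copy = items.clone();
        Arrays.sort(copy, String.CASE_INSENSITIVE_ORDER);
        return copy;
    }

    public static void main(String[] args) {
        String[] names = {"Cruz", "abel", "Baker"};
        // Natural (Unicode) ordering puts all uppercase letters before lowercase,
        // so "abel" sorts last; the Comparator version fixes that.
        System.out.println(Arrays.toString(naturalOrder(names)));    // [Baker, Cruz, abel]
        System.out.println(Arrays.toString(ignoreCaseOrder(names))); // [abel, Baker, Cruz]

        // Collections.sort and the Java 8 List.sort work on Lists.
        List<Integer> list = new ArrayList<>(Arrays.asList(5, 1, 4));
        Collections.sort(list);                 // natural ordering
        list.sort(Comparator.reverseOrder());   // Comparator ordering
        System.out.println(list);               // [5, 4, 1]
    }
}
```

The case-insensitive example also shows why a Comparator argument is useful even when a natural ordering exists.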
SYNTAX Declaring a Generic Method

FORM:
methodModifiers <genericParameters> returnType methodName(methodParameters)

EXAMPLE:
public static <T extends Comparable<T>> int binarySearch(T[] items, T target)

MEANING:
To declare a generic method, list the genericParameters inside the symbol pair <> and between the methodModifiers (e.g., public static) and the return type. The genericParameters can then be used in the specification of the methodParameters.

Both methods in class Collections are generic.

public static <T extends Comparable<? super T>> void sort(List<T> list)

The notation <T extends Comparable<? super T>> means that generic parameter T must implement the interface Comparable. The method parameter list (the object being sorted) is of type List<T>.

EXAMPLE 8.1
If array items stores a collection of integers, the method call

Arrays.sort(items, 0, items.length / 2);

sorts the integers in the first half of the array, leaving the second half of the array unchanged.

EXAMPLE 8.2
Let’s assume class Person is defined as follows:

public class Person implements Comparable<Person> {
    // Data Fields
    /* The last name */
    private String lastName;
    /* The first name */
    private String firstName;
    /* Birthday represented by an integer from 1 to 366 */
    private int birthDay;

    // Methods
    /** Compares two Person objects based on names. The result is based
        on the last names if they are different; otherwise, it is based
        on the first names.
        @param other The other Person
        @return A negative integer if this person’s name precedes the
        other person’s name; 0 if the names are the same; a positive
        integer if this person’s name follows the other person’s name.
     */
    @Override
    public int compareTo(Person other) {
        // Compare this Person to other using last names.
        int result = lastName.compareTo(other.lastName);
        // Compare first names if last names are the same.
        if (result == 0)
            return firstName.compareTo(other.firstName);
        else
            return result;
    }

    // Other methods
    . . .
}

Method Person.compareTo compares two Person objects based on their names using the last name as the primary key and the first name as the secondary key (the natural ordering). If people is an array of Person objects, the statement

Arrays.sort(people);

places the elements in array people in ascending order based on their names. Although the sort operation is O(n log n), the comparison of two names is O(k) where k is the length of the shorter name.

EXAMPLE 8.3
You can also use a class that implements Comparator<Person> to compare Person objects. As an example, method compare in class ComparePerson compares two Person objects based on their birthdays, not their names.

import java.util.Comparator;

public class ComparePerson implements Comparator<Person> {
    /** Compare two Person objects based on birth date.
        @param left The left-hand side of the comparison
        @param right The right-hand side of the comparison
        @return A negative integer if the left person’s birthday precedes
        the right person’s birthday; 0 if the birthdays are the same;
        a positive integer if the left person’s birthday follows the
        right person’s birthday.
     */
    @Override
    public int compare(Person left, Person right) {
        return left.getBirthDay() - right.getBirthDay();
    }
}

If peopleList is a List of Person objects, the statement

Collections.sort(peopleList, new ComparePerson());

places the elements in peopleList in ascending order based on their birthdays. Comparing two birthdays is an O(1) operation.

In Java 8, you can pass a lambda expression as a Comparator object to the List.sort method instead of writing a class that implements Comparator. We can sort peopleList with the following statement, where the lambda expression specifies the compare method to be used with items of peopleList.

peopleList.sort((p1, p2) -> p1.getBirthDay() - p2.getBirthDay());

EXERCISES FOR SECTION 8.1

SELF-CHECK
Indicate whether each of the following method calls is valid. Describe why it isn’t valid or, if it is valid, describe what it does.
Assume people is an array of Person objects and peopleList is a List of Person objects.

people.sort();
Arrays.sort(people, 0, people.length - 3);
Arrays.sort(peopleList, 0, peopleList.length - 3);
Collections.sort(people);
Collections.sort(peopleList, new ComparePerson());
Collections.sort(peopleList, 0, peopleList.size() - 3);

PROGRAMMING
Write a method call to sort the last half of array people using the natural ordering.
Write a method call to sort the last half of array people using the ordering determined by class ComparePerson.
Write a method call to sort peopleList using the natural ordering.

8.2 Selection Sort

Selection sort is a relatively easy-to-understand algorithm that sorts an array by making several passes through the array, selecting the next smallest item in the array each time, and placing it where it belongs in the array. We illustrate all sorting algorithms using an array of integer values for simplicity. However, each algorithm sorts an array of Comparable objects, so the int values must be wrapped in Integer objects. We show the algorithm next, where n is the number of elements in an array with subscripts 0 through n - 1 and fill is the subscript of the element that will store the next smallest item in the array.

Selection Sort Algorithm
1. for fill = 0 to n - 2 do
2.     Set posMin to the subscript of the smallest item in the subarray starting at subscript fill.
3.     Exchange the item at posMin with the one at fill.

Step 2 involves a search for the smallest item in each subarray. It requires a loop in which we compare each element in the subarray, starting with the one at position fill + 1, with the smallest value found so far. In the refinement of Step 2 shown in the following algorithm (Steps 2.1 through 2.4), we use posMin to store the subscript of the smallest value found so far. We assume that its initial position is fill.

Refinement of Selection Sort Algorithm (Step 2)
2.1 Initialize posMin to fill.
2.2 for next = fill + 1 to n - 1
2.3     if the item at next is less than the item at posMin
2.4         Reset posMin to next.

First the selection sort algorithm finds the smallest item in the array (smallest is 20) and moves it to position 0 by exchanging it with the element currently at position 0. At this point, the sorted part of the array consists of the new element at position 0. The values to be exchanged are shaded dark in all diagrams. The sorted elements are in light gray.

Next, the algorithm finds the smallest item in the subarray starting at position 1 (next smallest is 30) and exchanges it with the element currently at position 1. At this point, the sorted portion of the array consists of the elements at positions 0 and 1. Next, the algorithm selects the smallest item in the subarray starting at position 2 (next smallest is 35) and exchanges it with the element currently at position 2. At this point, the sorted portion of the array consists of the elements at positions 0, 1, and 2. Next, the algorithm selects the smallest item in the subarray starting at position 3 (next smallest is 60) and exchanges it with the element currently at position 3. The element at position 4, the last position in the array, must store the largest value (largest is 65), so the array is sorted.

Analysis of Selection Sort

Steps 2 and 3 are performed n - 1 times. Step 3 performs an exchange of items; consequently, there are n - 1 exchanges. Step 2.3 involves a comparison of items and is performed (n - 1 - fill) times for each value of fill. Since fill takes on all values between 0 and n - 2, the following series computes the number of executions of Step 2.3:

(n - 1) + (n - 2) + … + 3 + 2 + 1

This is a well-known series that can be written in closed form as

n × (n - 1) / 2 = n²/2 - n/2

For very large n, we can ignore all but the most significant term in this expression, so the number of comparisons is O(n²) and the number of exchanges is O(n).
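The closed form above is easy to check empirically. The following instrumented sketch (the class, its counter, and the use of int rather than Comparable are our simplifications) counts each execution of Step 2.3; for any input of length n it reports exactly n × (n - 1) / 2 comparisons, because the inner loop always runs to the end of the array.

```java
/** Instrumented selection sort: counts comparisons to check the
    n * (n - 1) / 2 analysis. Sorts the array in place as a side effect. */
public class SelectionCount {

    public static long countComparisons(int[] table) {
        long comparisons = 0;
        int n = table.length;
        for (int fill = 0; fill < n - 1; fill++) {
            int posMin = fill;
            for (int next = fill + 1; next < n; next++) {
                comparisons++;                     // Step 2.3 executes here
                if (table[next] < table[posMin]) {
                    posMin = next;                 // Step 2.4
                }
            }
            // Step 3: exchange table[fill] and table[posMin].
            int temp = table[fill];
            table[fill] = table[posMin];
            table[posMin] = temp;
        }
        return comparisons;                        // always n * (n - 1) / 2
    }

    public static void main(String[] args) {
        // The 5-element array from the walkthrough: 5 * 4 / 2 = 10 comparisons.
        System.out.println(countComparisons(new int[]{35, 65, 30, 60, 20})); // 10
    }
}
```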
Because the number of comparisons increases with the square of n, the selection sort is called a quadratic sort.

Code for Selection Sort

Listing 8.1 shows the code for selection sort, which follows the algorithm above.

LISTING 8.1 SelectionSort.java

/** Implements the selection sort algorithm. */
public class SelectionSort {

    /** Sort the array using selection sort algorithm.
        @pre table contains Comparable objects.
        @post table is sorted.
        @param table The array to be sorted
     */
    public static void sort(Comparable[] table) {
        int n = table.length;
        for (int fill = 0; fill < n - 1; fill++) {
            // Invariant: table[0 . . . fill - 1] is sorted.
            int posMin = fill;
            for (int next = fill + 1; next < n; next++) {
                // Invariant: table[posMin] is the smallest item in
                // table[fill . . . next - 1].
                if (table[next].compareTo(table[posMin]) < 0) {
                    posMin = next;
                }
            }
            // assert: table[posMin] is the smallest item in
            // table[fill . . . n - 1].
            // Exchange table[fill] and table[posMin].
            Comparable temp = table[fill];
            table[fill] = table[posMin];
            table[posMin] = temp;
            // assert: table[fill] is the smallest item in
            // table[fill . . . n - 1].
        }
        // assert: table[0 . . . n - 1] is sorted.
    }
}

PROGRAM STYLE
Making Sort Methods Generic

The code in Listing 8.1 will compile, but it will generate a warning message regarding an unchecked call to compareTo. You can eliminate this warning message by making the sort a generic sort. To accomplish this for the sort above, change the method heading to

public static <T extends Comparable<T>> void sort(T[] table) {

where the generic type parameter, T, must implement the Comparable interface. Also, change the data type of variable temp from Comparable to type T, the data type of the array elements.

T temp = table[fill];

We will code the other sorting algorithms in this chapter as generic methods.

EXERCISES FOR SECTION 8.2

SELF-CHECK
Show the progress of each pass of the selection sort for the following array. How many passes are needed? How many comparisons are performed?
How many exchanges? Show the array after each pass.

40 35 80 75 60 90 70 75 50 22

How would you modify selection sort to arrange an array of values in decreasing sequence?

It is not necessary to perform an exchange if the next smallest element is already at position fill. Modify the selection sort algorithm to eliminate the exchange of an element with itself. How does this affect big-O for exchanges? Discuss whether the time saved by eliminating unnecessary exchanges would exceed the cost of these extra steps.

PROGRAMMING
Modify the selection sort method to sort the elements in decreasing order and to incorporate the change in Self-Check Exercise 3.
Add statements to trace the progress of selection sort. Display the array contents after each exchange.

8.3 Insertion Sort

Our next quadratic sorting algorithm, insertion sort, is based on the technique used by card players to arrange a hand of cards. The player keeps the cards that have been picked up so far in sorted order. When the player picks up a new card, the player makes room for the new card and then inserts it in its proper place. The left diagram of Figure 8.1 shows a hand of cards (ignoring suits) after three cards have been picked up. If the next card is an 8, it should be inserted between the 6 and 10, maintaining the numerical order (middle diagram). If the next card is a 7, it should be inserted between the 6 and 8 as shown on the right in Figure 8.1.

FIGURE 8.1 Picking Up a Hand of Cards

To adapt this insertion algorithm to an array that has been filled with data, we start with a sorted subarray consisting of the first element only. For example, in the leftmost array of Figure 8.2, the initial sorted subarray consists of only the first value 30 (in element 0). The array element(s) that are in order after each pass are shaded dark, and the elements waiting to be inserted are in gray. We first insert the second element (25).
Because it is smaller than the element in the sorted subarray, we insert it before the old first element (30), and the sorted subarray has two elements (25, 30 in second diagram). Next, we insert the third element (15). It is also smaller than all the elements in the sorted subarray, so we insert it before the old first element (25), and the sorted subarray has three elements (15, 25, 30 in third diagram). Next, we insert the fourth element (20). It is smaller than the second and third elements in the sorted subarray, so we insert it before the old second element (25), and the sorted subarray has four elements (15, 20, 25, 30 in the fourth diagram). Finally, we insert the last element (28). It is smaller than the last element in the sorted subarray, so we insert it before the old last element (30), and the array is sorted. The algorithm follows.

FIGURE 8.2 An Insertion Sort

Insertion Sort Algorithm
1. for each array element from the second (nextPos = 1) to the last
2.     Insert the element at nextPos where it belongs in the array, increasing the length of the sorted subarray by 1 element.

To accomplish Step 2, the insertion step, we need to make room for the element to be inserted (saved in nextVal) by shifting all values that are larger than it, starting with the last value in the sorted subarray.

Refinement of Insertion Sort Algorithm (Step 2)
2.1 nextPos is the position of the element to insert.
2.2 Save the value of the element to insert in nextVal.
2.3 while nextPos > 0 and the element at nextPos - 1 > nextVal
2.4     Shift the element at nextPos - 1 to position nextPos.
2.5     Decrement nextPos by 1.
2.6 Insert nextVal at nextPos.

We illustrate these steps in Figure 8.3. For the array shown on the left, the first three elements (positions 0, 1, and 2) are in the sorted subarray, and the next element to insert is 20. First we save 20 in nextVal and 3 in nextPos.
Next we shift the value in position 2 (30) down one position (see the second array in Figure 8.3), and then we shift the value in position 1 (25) down one position (see third array in Figure 8.3). After these shifts (third array), there will temporarily be two copies of the last value shifted (25). The first of these (white background in Figure 8.3) is overwritten when the value in nextVal is moved into its correct position (nextPos is 1). The four-element sorted subarray is shaded dark on the right of Figure 8.3.

FIGURE 8.3 Inserting the Fourth Array Element

Analysis of Insertion Sort

The insertion step is performed n - 1 times. In the worst case, all elements in the sorted subarray are compared to nextVal for each insertion, so the maximum number of comparisons is represented by the series

1 + 2 + 3 + … + (n - 2) + (n - 1)

which is O(n²). In the best case (when the array is already sorted), only one comparison is required for each insertion, so the number of comparisons is O(n). The number of shifts performed during an insertion is one less than the number of comparisons or, when the new value is the smallest so far, the same as the number of comparisons. However, a shift in an insertion sort requires the movement of only one item, whereas in a selection sort, an exchange involves a temporary item and requires the movement of three items. A Java array of objects contains references to the actual objects, and it is these references that are changed. The actual objects remain in the physical locations where they were first created.

Code for Insertion Sort

Listing 8.2 shows class InsertionSort. We use method insert to perform the insertion step shown earlier. It would be more efficient to insert this code inside the for statement; however, using a method will make it easier to implement the Shell sort algorithm later. The while statement in method insert compares and shifts all values greater than nextVal in the subarray table[0 . . . nextPos - 1].
The while condition

(nextPos > 0) && (nextVal.compareTo(table[nextPos - 1]) < 0)

causes loop exit if the first element has been moved or if nextVal is not less than the next element to move. It could lead to an out-of-range subscript error if the order of the conditions were reversed. Recall that Java performs short-circuit evaluation. If the left-hand operand of an && operation is false, the right-hand operand is not evaluated. If this were not the case, when nextPos becomes 0, the array subscript would be -1, which is outside the subscript range. Because nextPos is a value parameter, variable nextPos in sort is unchanged.

LISTING 8.2 InsertionSort.java

/** Implements the insertion sort algorithm. */
public class InsertionSort {

    /** Sort the table using insertion sort algorithm.
        @pre table contains Comparable objects.
        @post table is sorted.
        @param table The array to be sorted
     */
    public static <T extends Comparable<T>> void sort(T[] table) {
        for (int nextPos = 1; nextPos < table.length; nextPos++) {
            // Invariant: table[0 . . . nextPos - 1] is sorted.
            // Insert element at position nextPos
            // in the sorted subarray.
            insert(table, nextPos);
        } // End for.
    } // End sort.

    /** Insert the element at nextPos where it belongs in the array.
        @pre table[0 . . . nextPos - 1] is sorted.
        @post table[0 . . . nextPos] is sorted.
        @param table The array being sorted
        @param nextPos The position of the element to insert
     */
    private static <T extends Comparable<T>> void insert(T[] table, int nextPos) {
        T nextVal = table[nextPos]; // Element to insert.
        while (nextPos > 0 && nextVal.compareTo(table[nextPos - 1]) < 0) {
            table[nextPos] = table[nextPos - 1]; // Shift down.
            nextPos--; // Check next smaller element.
        }
        // Insert nextVal at nextPos.
        table[nextPos] = nextVal;
    }
}

EXERCISES FOR SECTION 8.3

SELF-CHECK
Sort the following array using insertion sort. How many passes are needed? How many comparisons are performed? How many exchanges? Show the array after each pass.
40 35 80 75 60 90 70 75 50 22

PROGRAMMING
Eliminate method insert in Listing 8.2 and write its code inside the for statement.
Add statements to trace the progress of insertion sort. Display the array contents after the insertion of each value.

8.4 Comparison of Quadratic Sorts

Table 8.2 summarizes the performance of two quadratic sorts. To give you some idea as to what these numbers mean, Table 8.3 shows some values of n and n². If n is small (say, 100 or less), it really doesn’t matter which sorting algorithm you use. Insertion sort gives the best performance for larger arrays. Insertion sort is better because it takes advantage of any partial sorting that is in the array and uses less costly shifts instead of exchanges to rearrange array elements. In the next section, we discuss a variation on insertion sort, known as Shell sort, that has O(n^(3/2)) or better performance.

TABLE 8.2 Comparison of Quadratic Sorts

                  Number of Comparisons      Number of Exchanges
                  Best         Worst         Best         Worst
Selection sort    O(n²)        O(n²)         O(n)         O(n)
Insertion sort    O(n)         O(n²)         O(1)         O(n²)

TABLE 8.3 Comparison of Rates of Growth

n        n²          n log n
8        64          24
16       256         64
32       1024        160
64       4096        384
128      16,384      896
256      65,536      2048
512      262,144     4608

Since the time to sort an array of n elements is proportional to n², none of these algorithms is particularly good for large arrays (i.e., n > 100). The best sorting algorithms provide n log n average-case behavior and are considerably faster for large arrays. In fact, one of the algorithms that we will discuss has n log n worst-case behavior. You can get a feel for the difference in behavior by comparing the last column of Table 8.3 with the middle column.

Recall from Section 2.1 that big-O analysis ignores any constants that might be involved or any overhead that might occur from method calls needed to perform an exchange or a comparison. However, the tables give you an estimate of the relative performance of the different sorting algorithms.
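The best-case and worst-case comparison columns of Table 8.2 for insertion sort can be demonstrated directly. This sketch (class name and counter are ours; it uses int instead of Comparable for brevity) counts one comparison per while-loop test, which is the compareTo-equivalent in Listing 8.2: a sorted array of n elements needs only n - 1 comparisons, while a reverse-sorted array needs n × (n - 1) / 2.

```java
/** Instrumented insertion sort: contrasts the best case (already sorted,
    n - 1 comparisons) with the worst case (reverse order, n*(n-1)/2). */
public class InsertionCount {

    public static long countComparisons(int[] table) {
        long comparisons = 0;
        for (int nextPos = 1; nextPos < table.length; nextPos++) {
            int nextVal = table[nextPos];   // Element to insert.
            int pos = nextPos;
            while (pos > 0) {
                comparisons++;              // one comparison per loop test
                if (table[pos - 1] > nextVal) {
                    table[pos] = table[pos - 1]; // Shift down.
                    pos--;
                } else {
                    break;                  // nextVal is not smaller; stop.
                }
            }
            table[pos] = nextVal;           // Insert nextVal at pos.
        }
        return comparisons;
    }

    public static void main(String[] args) {
        System.out.println(countComparisons(new int[]{1, 2, 3, 4, 5, 6, 7, 8})); // 7
        System.out.println(countComparisons(new int[]{8, 7, 6, 5, 4, 3, 2, 1})); // 28
    }
}
```

Selection sort, by contrast, performs n × (n - 1) / 2 comparisons no matter how the input is arranged, which is why its best-case column is also O(n²).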
We haven’t talked about storage usage for these algorithms. Both quadratic sorts require storage for the array being sorted. However, there is only one copy of this array, so the array is sorted in place. There are also requirements for variables that store references to particular elements, loop control variables, and temporary variables. However, for large n, the size of the array dominates these other storage considerations, so the extra space usage is O(1).

Comparisons versus Exchanges

We have analyzed comparisons and exchanges separately, but you may be wondering whether one is more costly (in terms of computer time) than the other. In Java, an exchange requires your computer to switch two object references using a third object reference as an intermediary. A comparison requires your computer to execute a compareTo method. The cost of a comparison depends on its complexity, but it will probably be more costly than an exchange because of the overhead to call and execute method compareTo. In some programming languages (but not Java), an exchange may require physically moving the information in each object rather than simply swapping object references. For these languages, the cost of an exchange would be proportional to the size of the objects being exchanged and may be more costly than a comparison.

EXERCISES FOR SECTION 8.4

SELF-CHECK
Complete Table 8.3 for n = 1024 and n = 2048.
What do the new rows of Table 8.3 tell us about the increase in time required to process an array of 1024 elements versus an array of 2048 elements for O(n), O(n²), and O(n log n) algorithms?

8.5 Shell Sort: A Better Insertion Sort

Next, we describe the Shell sort, which is a type of insertion sort but with O(n^(3/2)) or better performance. Unlike the other algorithms, Shell sort is named after its discoverer, Donald L. Shell (“A High-Speed Sorting Procedure,” Communications of the ACM, Vol. 2, No. 7 [1959], pp. 30–32).
You can think of the Shell sort as a divide-and-conquer approach to insertion sort. Instead of sorting the entire array at the start, the idea behind Shell sort is to sort many smaller subarrays using insertion sort before sorting the entire array. The initial subarrays will contain two or three elements, so the insertion sorts will go very quickly. After each collection of subarrays is sorted, a new collection of subarrays with approximately twice as many elements as before will be sorted. The last step is to perform an insertion sort on the entire array, which has been presorted by the earlier sorts.

As an example, let’s sort the following array using initial subarrays with only two and three elements. We determine the elements in each subarray by setting a gap value between the subscripts in each subarray. We will explain how we pick the gap values later. We will use an initial gap of 7.

A gap of 7 means the first subarray has subscripts 0, 7, 14 (element values 40, 75, 57, medium shade); the second subarray has subscripts 1, 8, 15 (element values 35, 55, 65, darkest shade); the third subarray has subscripts 2, 9 (element values 80, 90, lightest shade); and so on. There are seven subarrays. We start the process by inserting the value at position 7 (value of gap) into its subarray (elements at 0 and 7). Next, we insert the element at position 8 into its subarray (elements at 1 and 8). We continue until we have inserted the last element (at position 15) in its subarray (elements at 1, 8, and 15). The result of performing insertion sort on all seven subarrays with two or three elements follows.

Next, we use a gap of 3. There are only three subarrays, and the longest one has six elements. The first subarray has subscripts 0, 3, 6, 9, 12, 15; the second subarray has subscripts 1, 4, 7, 10, 13; the third subarray has subscripts 2, 5, 8, 11, 14.

We start the process by inserting the element at position 3 (value of gap) into its subarray.
Next, we insert the element at position 4, and so on. Finally, we use a gap of 1, which performs an insertion sort on the entire array. Because of the presorting, it will require 1 comparison to insert 34, 1 comparison to insert 45 and 62, 4 comparisons to insert 35, 2 comparisons to insert 55, 1 comparison to insert 65, 3 comparisons to insert 57, 1 comparison to insert 60, and only 1 or 2 comparisons to insert each of the remaining values except for 80 (3 comparisons).

The algorithm for Shell sort follows. Steps 2 through 4 correspond to the insertion sort algorithm shown earlier. Because the elements with subscripts 0 through gap - 1 are the first elements in their subarrays, we begin Step 4 by inserting the element at position gap instead of at position 1 as we did for the insertion sort. Step 1 sets the initial gap between subscripts to n / 2, where n is the number of array elements. To get the next gap value, Step 6 divides the current gap value by 2.2 (chosen by experimentation). We want the gap to be 1 during the last insertion sort so that the entire array will be sorted. Step 5 ensures this by resetting gap to 1 if it is 2.

Shell Sort Algorithm
1. Set the initial value of gap to n / 2.
2. while gap > 0
3.     for each array element from position gap to the last element
4.         Insert this element where it belongs in its subarray.
5.     if gap is 2, set it to 1.
6.     else gap = gap / 2.2.

Refinement of Step 4, the Insertion Step
4.1 nextPos is the position of the element to insert.
4.2 Save the value of the element to insert in nextVal.
4.3 while nextPos > gap - 1 and the element at nextPos - gap > nextVal
4.4     Shift the element at nextPos - gap to position nextPos.
4.5     Decrement nextPos by gap.
4.6 Insert nextVal at nextPos.

Analysis of Shell Sort

You may wonder why Shell sort is an improvement over regular insertion sort, because it ends with an insertion sort of the entire array.
Each later sort (including the last one) will be performed on an array whose elements have been presorted by the earlier sorts. Because the behavior of insertion sort is closer to O(n) than O(n^2) when an array is nearly sorted, the presorting will make the later sorts, which involve larger subarrays, go more quickly. As a result of presorting, only 19 comparisons were required to perform an insertion sort on the last 15-element array shown in the previous section. This is critical because it is precisely for larger arrays where O(n^2) behavior would have the most negative impact. For the same reason, the improvement of Shell sort over insertion sort is much more significant for large arrays. A general analysis of Shell sort is an open research problem in computer science. The performance depends on how the decreasing sequence of values for gap is chosen. It is known that Shell sort is O(n^2) if successive powers of 2 are used for gap (i.e., 32, 16, 8, 4, 2, 1). If successive values for gap are of the form 2^k − 1 (i.e., 31, 15, 7, 3, 1), however, it can be proven that the performance is O(n^(3/2)). This sequence is known as Hibbard’s sequence. There are other sequences that give similar or better performance. We have presented an algorithm that selects the initial value of gap as n/2 and then divides by 2.2 and truncates to the next lowest integer. Empirical studies of this approach show that the performance is O(n^(5/4)) or maybe even O(n^(7/6)), but there is no theoretical basis for this result (M. A. Weiss, Data Structures and Problem Solving Using Java [Addison-Wesley, 1998], p. 230). Code for Shell Sort Listing 8.3 shows the code for Shell sort. Method insert has a third parameter, gap. The expression after && ((nextPos > gap - 1) && (nextVal.compareTo(table[nextPos - gap]) < 0)) compares elements that are separated by the value of gap instead of by 1. The expression before && is false if nextPos is the subscript of the first element in a subarray. 
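The gap-update rule analyzed above (start at n / 2, divide by 2.2 with truncation, and force a final gap of 1) is easy to observe in isolation. The following short sketch is an illustration only, not part of Listing 8.3; the class and method names are ours:

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrates the gap sequence used by the Shell sort in this section:
    start at n / 2, divide by 2.2 (truncating), and force a final gap of 1. */
public class GapSequence {

    /** Return the sequence of gap values used for an array of length n. */
    public static List<Integer> gapSequence(int n) {
        List<Integer> gaps = new ArrayList<>();
        int gap = n / 2;
        while (gap > 0) {
            gaps.add(gap);
            if (gap == 2) {
                gap = 1;                 // Guarantee a final insertion sort pass.
            } else {
                gap = (int) (gap / 2.2); // Truncate to the next lowest integer.
            }
        }
        return gaps;
    }

    public static void main(String[] args) {
        System.out.println(gapSequence(16)); // prints [8, 3, 1]
        System.out.println(gapSequence(10)); // prints [5, 2, 1]
    }
}
```

Note how the test for a gap of 2 matters for lengths such as 10: without it the sequence 5, 2 would be followed by (int)(2 / 2.2) = 0, and the final full-array insertion sort would never happen.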
The statements in the while loop shift the element at nextPos down by gap (one position in the subarray) and reset nextPos to the subscript of the element just moved. LISTING 8.3 ShellSort.java /** Implements the Shell sort algorithm. */ public class ShellSort { /** Sort the table using Shell sort algorithm. @pre table contains Comparable objects. @post table is sorted. @param table The array to be sorted */ public static <T extends Comparable<T>> void sort(T[] table) { // Gap between adjacent elements. int gap = table.length / 2; while (gap > 0) { for (int nextPos = gap; nextPos < table.length; nextPos++) { // Insert element at nextPos in its subarray. insert(table, nextPos, gap); } // End for. // Reset gap for next pass. if (gap == 2) { gap = 1; } else { gap = (int) (gap / 2.2); } } // End while. } // End sort. /** Inserts element at nextPos where it belongs in array. @pre Elements through nextPos - gap in subarray are sorted. @post Elements through nextPos in subarray are sorted. @param table The array being sorted @param nextPos The position of element to insert @param gap The gap between elements in the subarray */ private static <T extends Comparable<T>> void insert(T[] table, int nextPos, int gap) { T nextVal = table[nextPos]; // Element to insert. // Shift all values > nextVal in subarray down by gap. while ((nextPos > gap - 1) && (nextVal.compareTo(table[nextPos - gap]) < 0)) { // First element not shifted. table[nextPos] = table[nextPos - gap]; // Shift down. nextPos -= gap; // Check next position in subarray. } table[nextPos] = nextVal; // Insert nextVal. } } EXERCISES FOR SECTION 8.5 SELF-CHECK Trace the execution of Shell sort on the following array. Show the array after all sorts when the gap is 5, the gap is 2, and after the final sort when the gap is 1. List the number of comparisons and exchanges required when the gap is 5, the gap is 2, and when the gap is 1. Compare this with the number of comparisons and exchanges that would be required for a regular insertion sort. 
40 35 80 75 60 90 70 65 50 22 For the example of Shell sort shown in this section, determine how many comparisons and exchanges are required to insert all the elements for each gap value. Compare this with the number of comparisons and exchanges that would be required for a regular insertion sort. PROGRAMMING Eliminate method insert in Listing 8.3 and write its code inside the for statement. Add statements to trace the progress of Shell sort. Display each value of gap, and display the array contents after all subarrays for that gap value have been sorted. 8.6 Merge Sort The next algorithm that we will consider is called merge sort. A merge is a common data processing operation that is performed on two sequences of data (or data files) with the following characteristics: Both sequences contain items with a common compareTo method. The objects in both sequences are ordered in accordance with this compareTo method (i.e., both sequences are sorted). The result of the merge operation is to create a third sequence that contains all of the objects from the first two sorted sequences. For example, if the first sequence is 3, 5, 8, 15 and the second sequence is 4, 9, 12, 20, the final sequence will be 3, 4, 5, 8, 9, 12, 15, 20. The algorithm for merging the two sequences follows. Merge Algorithm 1. Access the first item from both sequences. 2. while not finished with either sequence 3. Compare the current items from the two sequences, copy the smaller current item to the output sequence, and access the next item from the input sequence whose item was copied. 4. Copy any remaining items from the first sequence to the output sequence. 5. Copy any remaining items from the second sequence to the output sequence. The while loop (Step 2) merges items from both input sequences to the output sequence. The current item from each sequence is the one that has been most recently accessed but not yet copied to the output sequence. 
Step 3 compares the two current items and copies the smaller one to the output sequence. If input sequence A's current item is the smaller one, the next item is accessed from sequence A and becomes its current item. If input sequence B's current item is the smaller one, the next item is accessed from sequence B and becomes its current item. After the end of either sequence is reached, Step 4 or Step 5 copies the items from the other sequence to the output sequence. Note that either Step 4 or Step 5 is executed, but not both. As an example, consider the sequences shown in Figure 8.4. Steps 2 and 3 will first copy the items from sequence A with the values 244 and 311 to the output sequence; then items from sequence B with values 324 and 415 will be copied; and then the item from sequence A with value 478 will be copied. At this point, we have copied all items in sequence A, so we exit the while loop and copy the remaining items from sequence B (499, 505) to the output (Step 5). image FIGURE 8.4 Merge Operation Analysis of Merge For two input sequences that contain a total of n elements, we need to move each element from its input sequence to its output sequence, so the time required for a merge is O(n). How about the space requirements? We need to be able to store both initial sequences and the output sequence. So the array cannot be merged in place, and the additional space usage is O(n). Code for Merge Listing 8.4 shows the merge algorithm applied to arrays of Comparable objects. Algorithm Steps 4 and 5 are implemented as while loops at the end of the method. LISTING 8.4 Merge Method /** Merge two sequences. @pre leftSequence and rightSequence are sorted. @post outputSequence is the merged result and is sorted. @param outputSequence The destination @param leftSequence The left input @param rightSequence The right input */ private static <T extends Comparable<T>> void merge(T[] outputSequence, T[] leftSequence, T[] rightSequence) { int i = 0; // Index into the left input sequence. 
int j = 0; // Index into the right input sequence. int k = 0; // Index into the output sequence. // While there is data in both input sequences while (i < leftSequence.length && j < rightSequence.length) { // Find the smaller and // insert it into the output sequence. if (leftSequence[i].compareTo(rightSequence[j]) < 0) { outputSequence[k++] = leftSequence[i++]; } else { outputSequence[k++] = rightSequence[j++]; } } // assert: one of the sequences has more items to copy. // Copy remaining input from left sequence into the output. while (i < leftSequence.length) { outputSequence[k++] = leftSequence[i++]; } // Copy remaining input from right sequence into output. while (j < rightSequence.length) { outputSequence[k++] = rightSequence[j++]; } } image PROGRAM STYLE By using the postincrement operator on the index variables, you can both extract the current item from one sequence and append it to the end of the output sequence in one statement. The statement: outputSequence[k++] = leftSequence[i++]; is equivalent to the following three statements, executed in the order shown: outputSequence[k] = leftSequence[i]; k++; i++; Both the single statement and the group of three statements maintain the invariant that the indexes reference the current item. Algorithm for Merge Sort We can modify merging to serve as an approach to sorting a single, unsorted array as follows: Split the array into two halves. Sort the left half. Sort the right half. Merge the two. What sort algorithm should we use to do Steps 2 and 3? We can use the merge sort algorithm we are developing! The base case will be a table of size 1, which is already sorted, so there is nothing to do for the base case. We write the algorithm next, showing its recursive step. Algorithm for Merge Sort 1. if the tableSize is > 1 2. Set halfSize to tableSize divided by 2. 3. Allocate a table called leftTable of size halfSize. 4. Allocate a table called rightTable of size tableSize – halfSize. 5. 
Copy the elements from table[0 … halfSize – 1] into leftTable. 6. Copy the elements from table[halfSize … tableSize – 1] into rightTable. 7. Recursively apply the merge sort algorithm to leftTable. 8. Recursively apply the merge sort algorithm to rightTable. 9. Apply the merge method using leftTable and rightTable as the input and the original table as the output. Trace of Merge Sort Algorithm Figure 8.5 illustrates the merge sort. Each recursive call to method sort with an array argument that has more than one element splits the array argument into a left array and a right array, where each new array is approximately half the size of the array argument. We then sort each of these arrays, beginning with the left half, by recursively calling method sort with the left array and right array as arguments. After returning from the sort of the left array and right array at each level, we merge these two halves together back into the space occupied by the array that was split. The left subarray in each recursive call (in gray) will be sorted before the processing of its corresponding right subarray (shaded dark) begins. Lines 4 and 6 merge two one-element arrays to form a sorted two-element array. At line 7, the two sorted two-element arrays (50, 60 and 30, 45) are merged into a sorted four-element array. Next, the right subarray shaded dark on line 1 will be sorted in the same way. When done, the sorted subarray (15, 20, 80, 90) will be merged with the sorted subarray on line 7. image FIGURE 8.5 Trace of Merge Sort Analysis of Merge Sort In Figure 8.5, the size of the arrays being sorted decreases from 8 to 4 (line 1) to 2 (line 2) to 1 (line 3). After each pair of subarrays is sorted, the pair will be merged to form a larger sorted array. Rather than showing a time sequence of the splitting and merging operations, we summarize them as follows: image Lines 1 through 3 show the splitting operations, and lines 4 through 6 show the merge operations. 
Line 4 shows the two-element arrays formed by merging one-element pairs, line 5 shows the four-element arrays formed by merging two-element pairs, and line 6 shows the sorted array. Because each of these lines involves a movement of n elements from smaller-size arrays to larger arrays, the effort to do each merge is O(n). The number of lines that require merging (three in this case) is log n because each recursive step splits the array in half. So the total effort to reconstruct the sorted array through merging is O(n log n). Recall from our discussion of recursion that whenever a recursive method is called, a copy of the local variables is saved on the run-time stack. Thus, as we go down the recursion chain sorting the leftTables, a sequence of rightTables of size n/2, n/4, …, n/2^k is allocated. Since n/2 + n/4 + … + 2 + 1 = n − 1, a total of n additional storage locations are required. Code for Merge Sort Listing 8.5 shows the MergeSort class. LISTING 8.5 MergeSort.java /** Implements the recursive merge sort algorithm. In this version, copies of the subtables are made, sorted, and then merged. */ public class MergeSort { /** Sort the array using the merge sort algorithm. pre: table contains Comparable objects. post: table is sorted. @param table The array to be sorted */ public static <T extends Comparable<T>> void sort(T[] table) { // A table with one element is sorted already. if (table.length > 1) { // Split table into halves. int halfSize = table.length / 2; T[] leftTable = (T[]) new Comparable[halfSize]; T[] rightTable = (T[]) new Comparable[table.length - halfSize]; System.arraycopy(table, 0, leftTable, 0, halfSize); System.arraycopy(table, halfSize, rightTable, 0, table.length - halfSize); // Sort the halves. sort(leftTable); sort(rightTable); // Merge the halves. merge(table, leftTable, rightTable); } } // See Listing 8.4 for the merge method. . . . EXERCISES FOR SECTION 8.6 SELF-CHECK Trace the execution of the merge sort on the following array, providing a figure similar to Figure 8.5. 
55 50 10 40 80 90 60 100 70 80 20 50 22 For the array in Question 1 above, show the value of halfSize and arrays leftTable and rightTable for each recursive call to method sort in Listing 8.5 and show the array elements after returning from each call to merge. How many times is sort called, and how many times is merge called? PROGRAMMING Add statements that trace the progress of method sort by displaying the array table after each merge operation. Also display the arrays referenced by leftTable and rightTable. 8.7 Timsort Timsort was developed by Tim Peters in 2002 as the library sort algorithm for the Python language. Timsort is a modification of the merge sort algorithm that takes advantage of sorted subsets that may exist in the data being sorted. The Java 8 API replaced merge sort with Timsort as the sort algorithm to sort lists of objects. Recall that merge sort recursively divides the array in half until there are two sequences of length one. It then merges the adjacent sequences. This is depicted in Figure 8.5. As the algorithm progresses, the run-time stack holds the left-hand sequences that are waiting to be merged with the right-hand sequences. Merge sort starts with two sequences of length 1 and merges them to form a sequence of length 2. It then takes the next two sequences of length 1 and merges them. It then merges the two sequences of length 2 to form a sequence of length 4. The process then repeats building two more sequences of length 2 to create a new sequence of length 4, which is then merged with the previous sequence of length 4 to form a sequence of length 8, and so on. If the data contains subsequences that are already sorted, they are still broken down into sequences of length 1 and then merged back together. Timsort starts by looking for sequences that are already sorted in either ascending or descending order (called a run). A descending sequence is in-place converted to an ascending sequence. 
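Run detection can be sketched in a few lines. The following illustration (class and method names are ours; the version actually used by our Timsort appears in Listing 8.6 later in this section) finds the length of the ascending or descending run starting at a given position, reversing a descending run in place so that every run ends up ascending:

```java
import java.util.Arrays;

/** Illustrates Timsort-style run detection: find the length of the
    ascending or descending run starting at lo, reversing a descending
    run in place so that every run ends up ascending. */
public class RunDemo {

    /** Return the length of the run starting at lo; a descending
        run is reversed in place. */
    public static int nextRun(int[] a, int lo) {
        if (lo == a.length - 1) {
            return 1;
        }
        int hi = lo + 1;
        if (a[hi - 1] <= a[hi]) {               // Ascending run.
            while (hi < a.length && a[hi - 1] <= a[hi]) {
                hi++;
            }
        } else {                                 // Strictly descending run.
            while (hi < a.length && a[hi - 1] > a[hi]) {
                hi++;
            }
            reverse(a, lo, hi - 1);              // Make it ascending.
        }
        return hi - lo;
    }

    /** Reverse a[i] through a[j] in place. */
    private static void reverse(int[] a, int i, int j) {
        while (i < j) {
            int temp = a[i];
            a[i] = a[j];
            a[j] = temp;
            i++;
            j--;
        }
    }

    public static void main(String[] args) {
        // The example data used in this section.
        int[] a = {1, 3, 5, 7, 9, 8, 6, 4, 12, 14, 15, 16, 17, 11, 10};
        int lo = 0;
        while (lo < a.length) {
            int len = nextRun(a, lo);
            System.out.println("run at " + lo + " with length " + len);
            lo += len;
        }
        System.out.println(Arrays.toString(a)); // All runs now ascending.
    }
}
```

Running this on the section's example data identifies runs of length 5, 3, 5, and 2, and leaves the array with every run ascending, matching the worked example that follows.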
After a sequence is identified, it is placed onto a stack. In merge sort, the stack contains sequences of decreasing size such that the sequence at position (i − 2) is twice the length of the sequence at position (i − 1) and four times the length of the sequence at position (i). Thus, the sequence at (i − 2) is longer than the sum of the lengths of the sequences at positions i and (i − 1), and the sequence at position (i − 1) is longer than the sequence at position i. Timsort maintains this same invariant. If the new sequence is at position i on the stack, then if it is shorter than the one at i − 1 and if the sum of the lengths of the new item and the one at i − 1 is smaller than the one at i − 2, then we leave the new one on the stack and look for the next sequence. However, if this invariant is violated by the addition of the new sequence, the invariant is restored by merging the shorter of the sequence at i or i − 2 with the sequence at i − 1. After the merge is completed, the invariant is again checked and the process repeats until the invariant is restored. Once all of the runs have been identified, the stack is collapsed by repeatedly merging the top two items on the stack. For example, consider the following input data: [0] [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] 1 3 5 7 9 8 6 4 12 14 15 16 17 11 10 The stack will contain the start position of the first item in a run and its length. The run from 0 to 4 is identified and is placed onto the stack at stack index 0. Index Start Length 0 0 5 The next run (from 5 to 7) shown earlier is a descending sequence. It is in-place converted to an ascending sequence as shown next [0] [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] 1 3 5 7 9 4 6 8 12 14 15 16 17 11 10 and it is placed on the stack at stack index 1. Index Start Length 0 0 5 1 5 3 Since the new sequence is shorter than the sequence below it in the stack, we look for the next sequence. This sequence is from 8 to 12. 
It is an ascending sequence and is placed onto the stack. Index Start Length 0 0 5 1 5 3 2 8 5 This sequence is longer than the one below it. The sequence at 0 is shorter than the combined length of the sequences at 1 and 2. Since the length of the sequence at 0 on the stack is less than or equal to the length of the sequence at 2, we merge the sequences at 0 and 1. [0] [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] 1 3 4 5 6 7 8 9 12 14 15 16 17 11 10 The stack now is Index Start Length 0 0 8 1 8 5 and the invariant is restored. Now the next sequence is identified from 13 to 14. It is a descending sequence, so it is in-place converted to an ascending sequence. [0] [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] 1 3 4 5 6 7 8 9 12 14 15 16 17 10 11 The stack is now Index Start Length 0 0 8 1 8 5 2 13 2 There are no more sequences, so we finish the merge process by merging the sequences on the stack, starting with the last two sequences placed on the stack, until there is only one sequence left. [0] [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] 1 3 4 5 6 7 8 9 10 11 12 14 15 16 17 Index Start Length 0 0 8 1 8 7 [0] [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] 1 3 4 5 6 7 8 9 10 11 12 14 15 16 17 Index Start Length 0 0 15 Observe that the last two sequences were already sorted (the number at array element [7] is smaller than the number at array element [8]), so there was no actual merging. The Timsort merge algorithm also takes advantage of this by finding the first item in the left sequence that is greater than the first item in the right sequence. It also finds the last item in the right sequence that is less than the last item in the left sequence. The resulting shortened sequences are then merged. Algorithm for Timsort 1. Set lo to zero. 2. while lo is less than the length of the array 3. Find the length of the run starting at lo. 4. If this is a decreasing run, reverse it. 5. Add this run to the stack. 
6. Set lo to lo + length of the run. 7. Collapse-merge the stack. 8. Force-merge the stack. Algorithm to Find a Run 1. Set hi to lo + 1. 2. if a[hi] < a[lo] 3. while hi < a.length and a[hi] < a[hi – 1] 4. increment hi else 5. while hi < a.length and a[hi] ≥ a[hi – 1] 6. increment hi 7. Return hi – lo as the length of the run. Algorithm to Reverse a Run 1. Set i to the beginning of the run. 2. Set j to the end of the run (i + length – 1). 3. while i < j 4. swap a[i] and a[j] 5. increment i 6. decrement j Algorithm for Collapse-Merge 1. while the stack size is > 1 2. Let top be the index of the top of the stack. 3. if top > 1 and stack[top-2].length ≤ stack[top-1].length + stack[top].length 4. if stack[top-2].length < stack[top].length 5. Merge the runs at stack[top-2] and stack[top-1]. else 6. Merge the runs at stack[top-1] and stack[top]. 7. else if stack[top-1].length ≤ stack[top].length 8. Merge the runs at stack[top-1] and stack[top]. else 9. Exit the while loop. Algorithm for Force-Merge 1. while the stack size is > 1 2. Let top be the index of the top of the stack. 3. if top > 1 and stack[top-2].length < stack[top].length 4. Merge the runs at stack[top-2] and stack[top-1]. else 5. Merge the runs at stack[top-1] and stack[top]. Merging Adjacent Sequences The actual merging of the sequences is the same as in merge sort. However, to avoid unnecessary copying of data to the temporary arrays, the start of the left sequence is adjusted to the first item that is greater than the first item of the right sequence. Likewise, the end of the right sequence is adjusted to be the last item that is less than the last item in the left sequence. Implementation Listing 8.6 shows the implementation of Timsort. An internal class, TS, is used to hold the references to the array being sorted and the stack. An ArrayList is used for the stack because we need to access the top three items. The internal class Run is used to represent the runs. 
LISTING 8.6 Timsort.java import java.util.ArrayList; import java.util.List; /** A simplified version of the Timsort algorithm. * @author Koffman & Wolfgang */ public class TimSort implements SortAlgorithm { /** Sort the array using the Timsort algorithm * @pre table contains Comparable objects. * @post table is sorted. * @param table The array to be sorted */ @Override public <T extends Comparable<T>> void sort(T[] table) { new TS<>(table).sort(); } /** Private inner class to hold the working state of * the algorithm. */ private static class TS<T extends Comparable<T>> { /** Private inner class to hold definitions * of the runs */ private static class Run { int startIndex; int length; Run(int startIndex, int length) { this.startIndex = startIndex; this.length = length; } } // Array of runs that are pending merging. List<Run> runStack; // Reference to the array being sorted. T[] table; /** constructor * @param table array to be sorted */ public TS(T[] table) { this.table = table; runStack = new ArrayList<>(); } /** Sort the array using the Timsort algorithm. * @pre table contains Comparable objects. * @post table is sorted. */ public void sort() { int nRemaining = table.length; if (nRemaining < 2) { // Single item array is already sorted. return; } int lo = 0; do { int runLength = nextRun(lo); runStack.add(new Run(lo, runLength)); mergeCollapse(); lo += runLength; nRemaining -= runLength; } while (nRemaining != 0); mergeForce(); } /** Method to find the length of the next run. A * run is a sequence of ascending items such that * a[i] <= a[i+1] or descending items such * that a[i] > a[i+1]. * If a descending sequence is found, it is turned * into an ascending sequence. * @param lo The index where the sequence starts * @return the length of the sequence. 
*/ private int nextRun(int lo) { if (lo == table.length - 1) { return 1; } int hi = lo + 1; if (table[hi - 1].compareTo(table[hi]) <= 0) { while (hi < table.length && table[hi - 1].compareTo(table[hi]) <= 0) { hi++; } } else { while (hi < table.length && table[hi - 1].compareTo(table[hi]) > 0) { hi++; } swapRun(lo, hi - 1); } return hi - lo; } /** Method to convert a descending sequence into * an ascending sequence. * @param lo The start index * @param hi The end index */ private void swapRun(int lo, int hi) { while (lo < hi) { swap(lo++, hi--); } } /** Swap the items in table[i] and table[j]. * @param i The index of one item * @param j The index of the other item */ private void swap(int i, int j) { T temp = table[i]; table[i] = table[j]; table[j] = temp; } /** Merge adjacent runs until the following * invariant is established. * 1. runLength[top - 2] > runLength[top - 1] + runLength[top] * 2. runLength[top - 1] > runLength[top] * This method is called each time a new run is * added to the stack. * Invariant is true before a new run is added to * the stack. */ private void mergeCollapse() { while (runStack.size() > 1) { int top = runStack.size() - 1; if (top > 1 && runStack.get(top - 2).length <= runStack.get(top - 1).length + runStack.get(top).length) { if (runStack.get(top - 2).length < runStack.get(top).length) { mergeAt(top - 2); } else { mergeAt(top - 1); } } else if (runStack.get(top - 1).length <= runStack.get(top).length) { mergeAt(top - 1); } else { break; } } } /** Merge all remaining runs. This method is called * to complete the sort. */ private void mergeForce() { while (runStack.size() > 1) { int top = runStack.size() - 1; if (top > 1 && runStack.get(top - 2).length < runStack.get(top).length) { mergeAt(top - 2); } else { mergeAt(top - 1); } } } /* Merge the two adjacent runs at i and i+1. i must be equal to runStack.size() - 2 or runStack.size() - 3. 
*/ private void mergeAt(int i) { int base1 = runStack.get(i).startIndex; int len1 = runStack.get(i).length; int base2 = runStack.get(i + 1).startIndex; int len2 = runStack.get(i + 1).length; runStack.set(i, new Run(base1, len1 + len2)); if (i == runStack.size() - 3) { runStack.set(i + 1, runStack.get(i + 2)); } runStack.remove(runStack.size() - 1); int newBase1 = reduceLeft(base1, base2); len1 = len1 - (newBase1 - base1); if (len1 > 0) { len2 = reduceRight(newBase1, len1, base2, len2); if (len2 > 0) { T[] run1 = (T[]) (new Comparable[len1]); T[] run2 = (T[]) (new Comparable[len2]); System.arraycopy(table, newBase1, run1, 0, len1); System.arraycopy(table, base2, run2, 0, len2); merge(newBase1, run1, run2); } } } /** Adjust the start of run1 so that its first * element is greater than or equal to the first element of run2 * @param base1 The index of the start of run1 * @param base2 The index of the start of run2 * @return the new start of run 1 */ int reduceLeft(int base1, int base2) { T base2Start = table[base2]; while (table[base1].compareTo(base2Start) < 0) { base1++; } return base1; } /** Adjust the end of run2 so that its last element * is less than or equal to the last element of run1 * @param base1 The start of run 1 * @param len1 The length of run 1 * @param base2 The start of run 2 * @param len2 The length of run 2 * @return the new length of run 2 */ int reduceRight(int base1, int len1, int base2, int len2) { T run1End = table[base1 + len1 - 1]; while (table[base2 + len2 - 1].compareTo(run1End) > 0) { len2--; } return len2; } /** Merge two runs into the table * @param destIndex Index where the merged run is to be inserted * @param run1 Array containing one run * @param run2 Array containing the other run */ private void merge(int destIndex, T[] run1, T[] run2) { int i = 0; int j = 0; while (i < run1.length && j < run2.length) { if (run1[i].compareTo(run2[j]) < 0) { table[destIndex++] = run1[i++]; } else { table[destIndex++] = run2[j++]; } } while (i < run1.length) { table[destIndex++] = run1[i++]; } while (j < run2.length) { table[destIndex++] = run2[j++]; } } } } 8.8 Heapsort The merge sort algorithm has the virtue that its time is O(n log n), but it still requires, at least temporarily, n extra storage locations. This next algorithm can be implemented without requiring any additional storage. It uses a heap to store the array and so is called heapsort. First Version of a Heapsort Algorithm We introduced the heap in Section 6.6. When used as a priority queue, a heap is a data structure that maintains the smallest value at the top. The following algorithm first places an array's data into a heap. Then it removes each heap item (an O(log n) process) and moves it back into the array. Heapsort Algorithm: First Version 1. Insert each value from the array to be sorted into a priority queue (heap). 2. Set i to 0. 3. while the priority queue is not empty 4. Remove an item from the queue and insert it back into the array at position i. 5. Increment i. Although this algorithm can be shown to be O(n log n), it does require n extra storage locations (the array and heap are both size n). We address this problem next. Revising the Heapsort Algorithm In the heaps we have used so far, each parent node value was less than the values of its children. We can also build the heap so that each parent is larger than its children. Figure 8.6 shows an example of such a heap. image FIGURE 8.6 Example of a Heap with Largest Value in Root Once we have such a heap, we can remove one item at a time from the heap. The item removed is always the top element, and it will end up at the bottom of the heap. When we reheap, we move the larger of a node's two children up the heap, instead of the smaller, so the next largest item is then at the top of the heap. Figure 8.7 shows the heap after we have removed one item, and Figure 8.8 shows the heap after we have removed two items. In both figures, the items in bold have been removed from the heap. 
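Before continuing with the in-place version, note that the first version of the algorithm can be written almost directly with java.util.PriorityQueue, which is a min-heap. This sketch is only an illustration of the extra-storage version; the class name and the sample values (other than the 89, 76, and 74 from Figure 8.6) are ours:

```java
import java.util.Arrays;
import java.util.PriorityQueue;
import java.util.Queue;

/** First version of heapsort: insert every value into a priority
    queue (a min-heap), then remove the items back into the array in
    ascending order. Uses O(n) extra storage for the heap. */
public class HeapSortV1 {

    public static <T extends Comparable<T>> void sort(T[] table) {
        Queue<T> heap = new PriorityQueue<>();
        for (T item : table) {      // Step 1: insert each value.
            heap.offer(item);
        }
        int i = 0;
        while (!heap.isEmpty()) {   // Steps 3-5: remove smallest, store at i.
            table[i++] = heap.poll();
        }
    }

    public static void main(String[] args) {
        Integer[] data = {89, 76, 74, 37, 32, 39, 66, 20, 26, 18, 28, 29, 6};
        sort(data);
        System.out.println(Arrays.toString(data));
        // prints [6, 18, 20, 26, 28, 29, 32, 37, 39, 66, 74, 76, 89]
    }
}
```

Both the n insertions and the n removals are O(log n) each, so the sketch is O(n log n) overall, but the separate heap is exactly the extra storage the revised, in-place algorithm below avoids.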
As we continue to remove items from the heap, the heap size shrinks as the number of removed items increases. Figure 8.9 shows the heap after we have emptied it. image FIGURE 8.7 Heap after Removal of Largest Item image FIGURE 8.8 Heap after Removal of Two Largest Items image FIGURE 8.9 Heap after Removal of All Its Items If we implement the heap using an array, each element removed will be placed at the end of the array but in front of the elements that were removed earlier. After we remove the last element, the array will be sorted. We illustrate this next. Figure 8.10 shows the array representation of the original heap. As before, the root, 89, is at position 0. The root's two children, 76 and 74, are at positions 1 and 2. For a node at position p, the left child is at 2p + 1 and the right child is at 2p + 2. A node at position c can find its parent at (c – 1) / 2. image FIGURE 8.10 Internal Representation of the Heap Shown in Figure 8.6 Figure 8.11 shows the array representation of the heaps in Figures 8.7 through 8.9. The items in gray have been removed from the heap and are sorted. Each time an item is removed, the heap part of the array decreases by one element and the sorted part of the array increases by one element. In the array at the bottom of Figure 8.11, all items have been removed from the heap and the array is sorted. image FIGURE 8.11 Internal Representation of the Heaps Shown in Figures 8.7 through 8.9 From our foregoing observations, we can sort the array that represents a heap in the following way. Algorithm for In-Place Heapsort 1. Build a heap by rearranging the elements in an unsorted array. 2. while the heap is not empty 3. Remove the first item from the heap by swapping it with the last item in the heap and restoring the heap property. Each time through the loop (Steps 2 and 3), the largest item remaining in the heap is placed at the end of the heap, just before the previously removed items. 
Thus, when the loop terminates, the items in the array are sorted. In Section 6.6, we discussed how to remove an item from a heap and restore the heap property. We also implemented a remove method for a heap in an ArrayList. Algorithm to Build a Heap Step 1 of the algorithm builds a heap. If we start with an array, table, of length table.length, we can consider the first item (index 0) to be a heap of one item. We now consider the general case where the items in array table from 0 through n - 1 form a heap; the items from n through table.length - 1 are not in the heap. As each item is inserted, we must “reheap” to restore the heap property. Refinement of Step 1 for In-Place Heapsort 1.1 while n is less than table.length 1.2 Increment n by 1. This inserts a new item into the heap. 1.3 Restore the heap property. Analysis of Revised Heapsort Algorithm From our knowledge of binary trees, we know that a heap of size n has log n levels. Building a heap requires finding the correct location for an item in a heap with log n levels. Because we have n items to insert and each insert (or remove) is O(log n), the buildHeap operation is O(n log n). Similarly, we have n items to remove from the heap, so that is also O(n log n). Because we are storing the heap in the original array, no extra storage is required. Code for Heapsort Listing 8.7 shows the HeapSort class. The sort method merely calls the buildHeap method followed by the shrinkHeap method, which is based on the remove method shown in Section 6.6. Method swap swaps the items in the table. LISTING 8.7 HeapSort.java /** Implementation of the heapsort algorithm. */ public class HeapSort { /** Sort the array using heapsort algorithm. @pre table contains Comparable items. @post table is sorted. @param table The array to be sorted */ public static <T extends Comparable<T>> void sort(T[] table) { buildHeap(table); shrinkHeap(table); } /** buildHeap transforms the table into a heap. @pre The array contains at least one item. 
@post All items in the array are in heap order. @param table The array to be transformed into a heap */ private static > void buildHeap(T[] table) { int n = 1; // Invariant: table[0 . . . n – 1] is a heap. while (n < table.length) { n++; // Add a new item to the heap and reheap. int child = n - 1; int parent = (child - 1) / 2; // Find parent. while (parent >= 0 && table[parent].compareTo(table[child]) < 0) { swap(table, parent, child); child = parent; parent = (child - 1) / 2; } } } /** shrinkHeap transforms a heap into a sorted array. @pre All items in the array are in heap order. @post The array is sorted. @param table The array to be sorted */ private static > void shrinkHeap(T[] table) { int n = table.length; // Invariant: table[0 . . . n – 1] forms a heap. // table[n . . . table.length – 1] is sorted. while (n > 0) { n–; swap(table, 0, n); // table[1 . . . n – 1] form a heap. // table[n . . . table.length – 1] is sorted. int parent = 0; while (true) { int leftChild = 2 * parent + 1; if (leftChild >= n) { break; // No more children. } int rightChild = leftChild + 1; // Find the larger of the two children. int maxChild = leftChild; if (rightChild < n // There is a right child. && table[leftChild].compareTo(table[rightChild]) < 0) { maxChild = rightChild; } // If the parent is smaller than the larger child, if (table[parent].compareTo(table[maxChild]) < 0) { // Swap the parent and child. swap(table, parent, maxChild); // Continue at the child level. parent = maxChild; } else { // Heap property is restored. break; // Exit the loop. } } } } /** Swap the items in table[i] and table[j]. @param table The array that contains the items @param i The index of one item @param j The index of the other item */ private static > void swap(T[] table, int i, int j) { T temp = table[i]; table[i] = table[j]; table[j] = temp; } } EXERCISES FOR SECTION 8.8 SELF-CHECK Build the heap from the numbers in the following list. How many exchanges were required? How many comparisons? 
55 50 10 40 80 90 60 100 70 80 20 50 22

2. Shrink the heap from Question 1 to create the array in sorted order. How many exchanges were required? How many comparisons?

8.9 Quicksort

The next algorithm we will study is called quicksort. Developed by C. A. R. Hoare in 1962, it works in the following way: given an array with subscripts first . . . last to sort, quicksort rearranges this array into two parts so that all the elements in the left subarray are less than or equal to a specified value (called the pivot) and all the elements in the right subarray are greater than the pivot. The pivot is placed between the two parts. Thus, all of the elements on the left of the pivot value are smaller than all elements on the right of the pivot value, so the pivot value is in its correct position. By repeating this process on the two halves, the whole array becomes sorted. As an example of this process, let's sort the following array:

44 75 23 43 55 12 64 77 33

We will assume that the first array element (44) is arbitrarily selected as the pivot value. A possible result of rearranging, or partitioning, the element values follows:

12 33 23 43 44 55 64 77 75

After the partitioning process, the pivot value, 44, is at its correct position. All values less than 44 are in the left subarray, and all values larger than 44 are in the right subarray, as desired. The next step would be to apply quicksort recursively to the two subarrays on either side of the pivot value, beginning with the left subarray (12, 33, 23, 43). Here is the result when 12 is the pivot value:

12 33 23 43 44 55 64 77 75

The pivot value is in the first position. Because the left subarray does not exist, the right subarray (33, 23, 43) is sorted next, resulting in the following situation:

12 23 33 43 44 55 64 77 75

The pivot value 33 is in its correct place, and the left subarray (23) and right subarray (43) have single elements, so they are sorted. At this point, we are finished sorting the left part of the original subarray, and quicksort is applied to the right subarray (55, 64, 77, 75).
In the following array, the elements that have been placed in their proper positions (12, 23, 33, 43, and 44) are the first five:

12 23 33 43 44 55 64 77 75

If we use 55 for the pivot, its left subarray will be empty after the partitioning process and the right subarray 64, 77, 75 will be sorted next. If 64 is the pivot, the situation will be as follows, and we sort the right subarray (77, 75) next.

12 23 33 43 44 55 64 77 75

If 77 is the pivot and we move it where it belongs, we end up with the following array. Because the left subarray (75) has a single element, it is sorted and we are done.

12 23 33 43 44 55 64 75 77

Algorithm for Quicksort

The algorithm for quicksort follows. We will describe how to do the partitioning later. We assume that the indexes first and last are the endpoints of the array being sorted and that the index of the pivot after partitioning is pivIndex.

Algorithm for Quicksort
1. if first < last then
2.   Partition the elements in the subarray first . . . last so that the pivot value is in its correct place (subscript pivIndex).
3.   Recursively apply quicksort to the subarray first . . . pivIndex - 1.
4.   Recursively apply quicksort to the subarray pivIndex + 1 . . . last.

Analysis of Quicksort

If the pivot value is a random value selected from the current subarray, then statistically it is expected that half of the items in the subarray will be less than the pivot and half will be greater than the pivot. If both subarrays always have the same number of elements (the best case), there will be log n levels of recursion. At each level, the partitioning process involves moving every element into its correct partition, so quicksort is O(n log n), just like merge sort. But what if the split is not 50–50? Let us consider the case where each split is 90–10. Instead of a 100-element array being split into two 50-element arrays, there will be one array with 90 elements and one with just 10. The 90-element array may be split 50–50, or it may also be split 90–10.
In the latter case, there would be one array with 81 elements and one with just 9 elements. Generally, for random input, the splits will not be exactly 50–50, but neither will they all be 90–10. An exact analysis is difficult and beyond the scope of this book, but the running time will be bounded by a constant × n log n. There is one situation, however, where quicksort gives very poor behavior. If, each time we partition the array, we end up with a subarray that is empty, the other subarray will have one less element than the one just split (only the pivot value will be removed). Therefore, we will have n levels of recursive calls (instead of log n), and the algorithm will be O(n^2). Because of the overhead of recursive method calls (versus iteration), quicksort will take longer and require more extra storage on the run-time stack than any of the earlier quadratic algorithms. We will discuss a way to handle this situation later.

Code for Quicksort

Listing 8.8 shows the QuickSort class. The public method sort calls the recursive quickSort method, giving it the bounds of the table as the initial values of first and last. The two recursive calls in quickSort will cause the procedure to be applied to the subarrays that are separated by the value at pivIndex. If any subarray contains just one element (or zero elements), an immediate return will occur.

LISTING 8.8 QuickSort.java

/** Implements the quicksort algorithm. */
public class QuickSort {

    /** Sort the table using the quicksort algorithm.
        @pre table contains Comparable objects.
        @post table is sorted.
        @param table The array to be sorted
    */
    public static <T extends Comparable<T>> void sort(T[] table) {
        // Sort the whole table.
        quickSort(table, 0, table.length - 1);
    }

    /** Sort a part of the table using the quicksort algorithm.
        @post The part of table from first through last is sorted.
        @param table The array to be sorted
        @param first The index of the low bound
        @param last The index of the high bound
    */
    private static <T extends Comparable<T>> void quickSort(T[] table, int first, int last) {
        if (first < last) { // There is data to be sorted.
            // Partition the table.
            int pivIndex = partition(table, first, last);
            // Sort the left half.
            quickSort(table, first, pivIndex - 1);
            // Sort the right half.
            quickSort(table, pivIndex + 1, last);
        }
    }

    // Insert partition method. See Listing 8.9
    // . . .
}

Algorithm for Partitioning

The partition method selects the pivot and performs the partitioning operation. When we are selecting the pivot, it does not really matter which element is the pivot value (if the arrays are randomly ordered to begin with). For simplicity we chose the element with subscript first. We then begin searching for the first value at the left end of the subarray that is greater than the pivot value. When we find it, we search for the first value at the right end of the subarray that is less than or equal to the pivot value. These two values are exchanged, and we repeat the search and exchange operations. This is illustrated in Figure 8.12, where up points to the first value greater than the pivot and down points to the first value less than or equal to the pivot value. The elements less than the pivot are shaded dark, and the elements greater than the pivot are in gray.

FIGURE 8.12 Locating First Values to Exchange

The value 75 is the first value at the left end of the array that is larger than 44, and 33 is the first value at the right end that is less than or equal to 44, so these two values are exchanged. The indexes up and down are advanced again, as shown in Figure 8.13.

FIGURE 8.13 Array after the First Exchange

The value 55 is the next value at the left end that is larger than 44, and 12 is the next value at the right end that is less than or equal to 44, so these two values are exchanged, and up and down are advanced again, as shown in Figure 8.14.
FIGURE 8.14 Array after the Second Exchange

After the second exchange, the first five array elements contain the pivot value and all values less than or equal to the pivot; the last four elements contain all values larger than the pivot. The value 55 is selected once again by up as the next element larger than the pivot; 12 is selected by down as the next element less than or equal to the pivot. Since up has now "passed" down, these values are not exchanged. Instead, the pivot value (subscript first) and the value at position down are exchanged. This puts the pivot value in its proper position (the new subscript is down) as shown in Figure 8.15.

FIGURE 8.15 Array after the Pivot Is Inserted

The partition process is now complete, and the value of down is returned as the pivot index pivIndex. Method quickSort will be called recursively to sort the left subarray and the right subarray. The algorithm for partition follows:

Algorithm for partition Method
1. Define the pivot value as the contents of table[first].
2. Initialize up to first and down to last.
3. do
4.   Increment up until up selects the first element greater than the pivot value or up has reached last.
5.   Decrement down until down selects the first element less than or equal to the pivot value or down has reached first.
6.   if up < down then
7.     Exchange table[up] and table[down].
8. while up is to the left of down
9. Exchange table[first] and table[down].
10. Return the value of down to pivIndex.

Code for partition

The code for partition is shown in Listing 8.9. The while statement:

while ((up < last) && (pivot.compareTo(table[up]) >= 0)) {
    up++;
}

advances the index up until it is equal to last or until it references an item in table that is greater than the pivot value. Similarly, the while statement:

while (pivot.compareTo(table[down]) < 0) {
    down--;
}

moves the index down until it references an item in table that is less than or equal to the pivot value.
The do–while condition (up < down) ensures that the partitioning process will continue while up is to the left of down. What happens if there is a value in the array that is the same as the pivot value? The index down will stop at such a value. If up has stopped prior to reaching that value, table[up] and table[down] will be exchanged, and the value equal to the pivot will be in the left partition. If up has passed this value and therefore passed down, table[first] will be exchanged with table[down] (same value as table[first]), and the value equal to the pivot will still be in the left partition. What happens if the pivot value is the smallest value in the array? Since the pivot value is at table[first], the loop will terminate with down equal to first. In this case, the left partition is empty. Figure 8.16 shows an array for which this is the case.

FIGURE 8.16 Values of up, down, and pivIndex If the Pivot Is the Smallest Value

By similar reasoning, we can show that up will stop at last if there is no element in the array larger than the pivot. In this case, down will also stay at last, and the pivot value (table[first]) will be swapped with the last value in the array, so the right partition will be empty. Figure 8.17 shows an array for which this is the case.

FIGURE 8.17 Values of up, down, and pivIndex If the Pivot Is the Largest Value

LISTING 8.9 Quicksort partition Method (First Version)

/** Partition the table so that values from first to pivIndex
    are less than or equal to the pivot value, and values from
    pivIndex to last are greater than the pivot value.
    @param table The table to be partitioned
    @param first The index of the low bound
    @param last The index of the high bound
    @return The location of the pivot value
*/
private static <T extends Comparable<T>> int partition(T[] table, int first, int last) {
    // Select the first item as the pivot value.
    T pivot = table[first];
    int up = first;
    int down = last;
    do {
        /* Invariant:
           All items in table[first . . . up - 1] <= pivot
           All items in table[down + 1 . . . last] > pivot
        */
        while ((up < last) && (pivot.compareTo(table[up]) >= 0)) {
            up++;
        }
        // assert: up equals last or table[up] > pivot.
        while (pivot.compareTo(table[down]) < 0) {
            down--;
        }
        // assert: down equals first or table[down] <= pivot.
        if (up < down) { // if up is to the left of down.
            // Exchange table[up] and table[down].
            swap(table, up, down);
        }
    } while (up < down); // Repeat while up is left of down.
    // Exchange table[first] and table[down] thus putting the
    // pivot value where it belongs.
    swap(table, first, down);
    // Return the index of the pivot value.
    return down;
}

A Revised partition Algorithm

We stated earlier that quicksort is O(n^2) when each split yields one empty subarray. Unfortunately, that would be the case if the array were sorted. So the worst possible performance occurs for a sorted array, which is not very desirable. A better solution is to pick the pivot value in a way that is less likely to lead to a bad split. One approach is to examine the first, middle, and last elements in the array and select the median of these three values as the pivot. We can do this by sorting the three-element subarray (shaded dark in Figure 8.18). After sorting, the smallest of the three values is in position first, the median is in position middle, and the largest is in position last.

FIGURE 8.18 Sorting First, Middle, and Last Elements in Array

At this point, we can exchange the first element with the middle element (the median) and use the partition algorithm shown earlier, which uses the first element (now the median) as the pivot value. When we exit the partitioning loop, table[first] and table[down] are exchanged, moving the pivot value where it belongs (position down). This revised partition algorithm follows.

Algorithm for Revised partition Method
1. Sort table[first], table[middle], and table[last].
2.
Move the median value to table[first] (the pivot value) by exchanging table[first] and table[middle].
3. Initialize up to first and down to last.
4. do
5.   Increment up until up selects the first element greater than the pivot value or up has reached last.
6.   Decrement down until down selects the first element less than or equal to the pivot value or down has reached first.
7.   if up < down then
8.     Exchange table[up] and table[down].
9. while up is to the left of down.
10. Exchange table[first] and table[down].
11. Return the value of down to pivIndex.

You may be wondering whether you can avoid the double shift (Steps 2 and 10) and just leave the pivot value at table[middle], where it belongs. The answer is "yes," but you would also need to modify the partition algorithm further if you did this. Programming Project 6 addresses this issue and the construction of an industrial-strength quicksort method.

Code for Revised partition Method

Listing 8.10 shows the revised version of method partition with method sort3, which uses three pairwise comparisons to sort the three selected items in table so that

table[first] <= table[middle] <= table[last]

Method partition begins with a call to method sort3 and then calls swap to make the median the pivot. The rest of the method is unchanged.

LISTING 8.10 Revised partition Method and sort3

private static <T extends Comparable<T>> int partition(T[] table, int first, int last) {
    /* Put the median of table[first], table[middle], table[last]
       into table[first], and use this value as the pivot.
    */
    sort3(table, first, last);
    // Swap first element with median.
    swap(table, first, (first + last) / 2);
    // Continue as in Listing 8.9
    // . . .
}

/** Sort table[first], table[middle], and table[last].
    @param table The table to be sorted
    @param first Index of the first element
    @param last Index of the last element
*/
private static <T extends Comparable<T>> void sort3(T[] table, int first, int last) {
    int middle = (first + last) / 2;
    // Sort table[first], table[middle], table[last].
    if (table[middle].compareTo(table[first]) < 0) {
        swap(table, first, middle);
    }
    // assert: table[first] <= table[middle]
    if (table[last].compareTo(table[middle]) < 0) {
        swap(table, middle, last);
    }
    // assert: table[last] is the largest value of the three.
    if (table[middle].compareTo(table[first]) < 0) {
        swap(table, first, middle);
    }
    // assert: table[first] <= table[middle] <= table[last].
}

PITFALL
Falling Off Either End of the Array

A common problem when incrementing up or down during the partition process is falling off either end of the array. This will be indicated by an ArrayIndexOutOfBoundsException. We used the condition ((up < last) && (pivot.compareTo(table[up]) >= 0)) to keep up from falling off the right end of the array. Self-Check Exercise 3 asks why we don't need to write similar code to avoid falling off the left end of the array.

EXERCISES FOR SECTION 8.9

SELF-CHECK
1. Trace the execution of quicksort on the following array, assuming that the first item in each subarray is the pivot value. Show the values of first and last for each recursive call and the array elements after returning from each call. Also, show the value of pivot during each call and the value returned through pivIndex. How many times is sort called, and how many times is partition called?

55 50 10 40 80 90 60 100 70 80 20 50 22

2. Redo Question 1 above using the revised partition algorithm, which does a preliminary sort of three elements and selects their median as the pivot value.
3. Explain why the condition (down > first) is not necessary in the loop that decrements down.

PROGRAMMING
1. Insert statements to trace the quicksort algorithm. After each call to partition, display the values of first, pivIndex, and last and the array.

8.10 Testing the Sort Algorithms

To test the sorting algorithms, we need to exercise them with a variety of test cases. We want to make sure that they work and also want to get some idea of their relative performance when sorting the same array.
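Before building the full test driver, the pieces developed in Section 8.9 can be assembled into one self-contained class for a quick smoke test. This is a sketch that follows Listings 8.8 and 8.9 directly; only the class name QuickSortDemo and the small main method are additions of ours:

```java
import java.util.Arrays;

/** Self-contained assembly of the chapter's quicksort
    (first-version partition from Listing 8.9). */
public class QuickSortDemo {

    public static <T extends Comparable<T>> void sort(T[] table) {
        quickSort(table, 0, table.length - 1);
    }

    private static <T extends Comparable<T>> void quickSort(T[] table, int first, int last) {
        if (first < last) {
            int pivIndex = partition(table, first, last);
            quickSort(table, first, pivIndex - 1);
            quickSort(table, pivIndex + 1, last);
        }
    }

    /** Partition using the first element as the pivot. */
    private static <T extends Comparable<T>> int partition(T[] table, int first, int last) {
        T pivot = table[first];
        int up = first;
        int down = last;
        do {
            // Advance up to the first element greater than the pivot.
            while ((up < last) && (pivot.compareTo(table[up]) >= 0)) {
                up++;
            }
            // Move down to the first element <= the pivot.
            while (pivot.compareTo(table[down]) < 0) {
                down--;
            }
            if (up < down) {
                swap(table, up, down);
            }
        } while (up < down);
        swap(table, first, down); // Put the pivot where it belongs.
        return down;
    }

    private static <T> void swap(T[] table, int i, int j) {
        T temp = table[i];
        table[i] = table[j];
        table[j] = temp;
    }

    public static void main(String[] args) {
        // The example array from the start of Section 8.9.
        Integer[] data = {44, 75, 23, 43, 55, 12, 64, 77, 33};
        sort(data);
        System.out.println(Arrays.toString(data));
    }
}
```

Running main prints the example array in sorted order, matching the trace worked through in the text.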
We should test the methods with small arrays, large arrays, arrays whose elements are in random order, arrays that are already sorted, and arrays with duplicate copies of the same value. For performance comparisons to be meaningful, the methods must sort the same arrays. Listing 8.11 shows a driver program that tests methods Arrays.sort (from the API java.util) and QuickSort.sort on the same array of random integer values. Method System.currentTimeMillis returns the current time in milliseconds. This method is called just before a sort begins and just after the return from a sort. The elapsed time between calls is displayed in the console window. Although the numbers shown will not be precise, they give a good indication of the relative performance of two sorting algorithms if this is the only application currently executing. Method verify verifies that the array elements are sorted by checking that each element in the array is not greater than its successor. Method dumpTable (not shown) should display the first 10 elements and last 10 elements of an array (or the entire array if the array has 20 or fewer elements).

LISTING 8.11 Driver to Test Sort Algorithms

/** Driver program to test sorting methods.
    @param args Not used
*/
public static void main(String[] args) {
    int size = Integer.parseInt(JOptionPane.showInputDialog("Enter Array size:"));
    Integer[] items = new Integer[size]; // Array to sort.
    Integer[] copy = new Integer[size];  // Copy of array.
    Random rInt = new Random();          // For random number generation
    // Fill the array and copy with random Integers.
    for (int i = 0; i < items.length; i++) {
        items[i] = rInt.nextInt();
        copy[i] = items[i];
    }
    // Sort with utility method.
    long startTime = System.currentTimeMillis();
    Arrays.sort(items);
    System.out.println("Utility sort time is "
        + (System.currentTimeMillis() - startTime) + "ms");
    System.out.println("Utility sort successful (true/false): "
        + verify(items));
    // Reload array items from array copy.
    for (int i = 0; i < items.length; i++) {
        items[i] = copy[i];
    }
    // Sort with quicksort.
    startTime = System.currentTimeMillis();
    QuickSort.sort(items);
    System.out.println("QuickSort time is "
        + (System.currentTimeMillis() - startTime) + "ms");
    System.out.println("QuickSort successful (true/false): "
        + verify(items));
    dumpTable(items); // Display part of the array.
}

/** Verifies that the elements in array test are in increasing order.
    @param test The array to verify
    @return true if the elements are in increasing order;
            false if any 2 elements are not in increasing order
*/
private static boolean verify(Comparable[] test) {
    boolean ok = true;
    int i = 0;
    while (ok && i < test.length - 1) {
        ok = test[i].compareTo(test[i + 1]) <= 0;
        i++;
    }
    return ok;
}

EXERCISES FOR SECTION 8.10

SELF-CHECK
1. Explain why method verify will always determine whether an array is sorted. Does verify work if an array contains duplicate values?
2. Explain the effect of removing the second for statement in the main method.

PROGRAMMING
1. Write method dumpTable.
2. Modify the driver method to fill array items with a collection of integers read from a file when args[0] is not null.
3. Extend the driver to test all O(n log n) sorts and collect statistics on the different sorting algorithms. Test the sorts using an array of random numbers and also a data file processed by the solution to Programming Exercise 2.

8.11 The Dutch National Flag Problem (Optional Topic)

A variety of partitioning algorithms for quicksort have been published. Most are variations on the one presented in this text. There is another popular variation that uses a single left-to-right scan of the array (instead of scanning left and scanning right as we did). The following case study illustrates a partitioning algorithm that combines both scanning techniques to partition an array into three segments. The famous computer scientist Edsger W. Dijkstra described this problem in his book A Discipline of Programming (Prentice-Hall, 1976).
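The single left-to-right scan variation mentioned above can be sketched as follows. This is not code from the text; it is the scheme commonly attributed to Lomuto, shown for comparison with the two-index partition of Listing 8.9 (class and method names here are ours):

```java
/** Sketch of a single left-to-right scan partition (Lomuto-style),
    shown for comparison with the chapter's two-index partition. */
public class ScanPartition {

    /** Partition table[first . . . last] around table[first] as the
        pivot and return the pivot's final index. */
    public static <T extends Comparable<T>> int partition(T[] table, int first, int last) {
        T pivot = table[first];
        int boundary = first; // Last index known to hold a value <= pivot.
        // One scan, left to right: grow the "<= pivot" region as we go.
        for (int i = first + 1; i <= last; i++) {
            if (table[i].compareTo(pivot) <= 0) {
                boundary++;
                swap(table, boundary, i);
            }
        }
        // Put the pivot between the two regions.
        swap(table, first, boundary);
        return boundary;
    }

    private static <T> void swap(T[] table, int i, int j) {
        T temp = table[i];
        table[i] = table[j];
        table[j] = temp;
    }
}
```

A quickSort method identical to the one in Listing 8.8 could call this partition in place of Listing 8.9; both place the pivot at its final position and return that index.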
CASE STUDY The Problem of the Dutch National Flag Problem The Dutch national flag consists of three stripes that are colored (from top to bottom) red, white, and blue. In Figure 8.19 we use dark gray for blue and light gray for red. Unfortunately, when the flag arrived, it looked like Figure 8.20; threads of each of the colors were all scrambled together! Fortunately, we have a machine that can unscramble it, but it needs software. image FIGURE 8.19 The Dutch National Flag image FIGURE 8.20 Scrambled Dutch National Flag Analysis Our unscrambling machine has the following abilities: It can look at one thread in the flag and determine its color. It can swap the position of two threads in the flag. Our machine can also execute while loops and if statements. Design Loop Invariant When we partitioned the array in quicksort, we split the array into three regions. Values between first and up were less than or equal to the pivot; values between down and last were greater than the pivot; and values between up and down were unknown. We started with the unknown region containing the whole array (first == up, and down == last). The partitioning algorithm preserves this invariant while shrinking the unknown region. The loop terminates when the unknown region becomes empty (up > down). image Since our goal is to have three regions when we are done, let us define four regions: the red region, the white region, the blue region, and the unknown region. Now, initially the whole flag is unknown. When we get done, however, we would like the red region on top, the white region in the middle, and the blue region on the bottom. The unknown region must be empty. Let us assume that the threads are stored in an array threads and that the total number of threads is HEIGHT. Let us define red to be the upper bound of the red region, white to be the lower bound of the white region, and blue to be the lower bound of the blue region. 
Then, if our flag is complete, we can say the following:

If 0 ≤ i < red, then threads[i] is red.
If white < i ≤ blue, then threads[i] is white.
If blue < i < HEIGHT, then threads[i] is blue.

What about the case where red ≤ i ≤ white? When the flag is all sorted, red should be greater than white, so this region should not exist. However, when we start, everything is in this region, so a thread in that region can have any color. Thus, we can define the following loop invariant:

If 0 ≤ i < red, then threads[i] is red.
If red ≤ i ≤ white, then the color is unknown.
If white < i ≤ blue, then threads[i] is white.
If blue < i < HEIGHT, then threads[i] is blue.

This is illustrated in Figure 8.21.

FIGURE 8.21 Dutch National Flag Loop Invariant

Algorithm

We can solve our problem by establishing the loop invariant and then executing a loop that both preserves the loop invariant and shrinks the unknown region.

1. Set red to 0, white to HEIGHT - 1, and blue to HEIGHT - 1. This establishes our loop invariant with the unknown region the whole flag and the red, white, and blue regions empty.
2. while red ≤ white
3.   Shrink the distance between red and white while preserving the loop invariant.

Preserving the Loop Invariant

Let us assume that we now know the color of threads[white] (the thread at position white). Our goal is to either leave threads[white] where it is (in the white region if it is white) or "move it" to the region where it belongs. There are three cases to consider:

Case 1: The color of threads[white] is white. In this case, we merely decrement the value of white to restore the invariant. By doing so, we increase the size of the white region by one thread.

Case 2: The color of threads[white] is red. We know from our invariant that the color of threads[red] is unknown. Therefore, if we swap the thread at threads[red] with the one at threads[white], we can then increment the value of red and preserve the invariant.
By doing this, we add the thread to the end of the red region and reduce the size of the unknown region by one thread.

Case 3: The color of threads[white] is blue. We know from our invariant that the color of threads[blue] is white. Thus, if we swap the thread at threads[white] with the thread at threads[blue] and then decrement both white and blue, we preserve the invariant. By doing this, we insert the thread at the beginning of the blue region and reduce the size of the unknown region by one thread.

Implementation

A complete implementation of this program is left as a programming project. We show the coding of the sort algorithm in Listing 8.12.

LISTING 8.12 Dutch National Flag Sort

public void sort() {
    int red = 0;
    int white = height - 1;
    int blue = height - 1;
    /* Invariant:
       0 <= i < red      ==> threads[i].getColor() == Color.RED
       red <= i <= white ==> threads[i].getColor() is unknown
       white < i <= blue ==> threads[i].getColor() == Color.WHITE
       blue < i < height ==> threads[i].getColor() == Color.BLUE
    */
    while (red <= white) {
        if (threads[white].getColor() == Color.WHITE) {
            white--;
        } else if (threads[white].getColor() == Color.RED) {
            swap(red, white, g);
            red++;
        } else { // threads[white].getColor() == Color.BLUE
            swap(white, blue, g);
            white--;
            blue--;
        }
    }
    // assert: red > white so unknown region is now empty.
}

EXERCISES FOR SECTION 8.11

PROGRAMMING
1. Adapt the Dutch National Flag algorithm to do the quicksort partitioning. Consider the red region to be those values less than the pivot, the white region to be those values equal to the pivot, and the blue region to be those values greater than the pivot. You should initially sort the first, middle, and last items and use the middle value as the pivot value.

Chapter Review

We analyzed several sorting algorithms; their performance is summarized in Table 8.4. Two quadratic algorithms, O(n^2), are selection sort and insertion sort. They give satisfactory performance for small arrays (up to 100 elements).
Generally, insertion sort is considered to be the best of the quadratic sorts. Shell sort, O(n^(5/4)), gives satisfactory performance for arrays up to 5000 elements. Quicksort has average-case performance of O(n log n), but if the pivot is picked poorly, the worst-case performance is O(n^2). Merge sort and heapsort have O(n log n) performance. The Java API contains "industrial-strength" sort algorithms in the classes java.util.Arrays and java.util.Collections. The methods in Arrays use a mixture of quicksort and insertion sort for sorting arrays of primitive-type values and merge sort for sorting arrays of objects. For primitive types, quicksort is used until the size of the subarray reaches the point where insertion sort is quicker (seven elements or less). The sort method in Collections merely copies the list into an array and then calls Arrays.sort.

TABLE 8.4 Comparison of Sort Algorithms (Number of Comparisons)

                 Best         Average      Worst
Selection sort   O(n^2)       O(n^2)       O(n^2)
Insertion sort   O(n)         O(n^2)       O(n^2)
Timsort          O(n)         O(n log n)   O(n log n)
Shell sort       O(n^(7/6))   O(n^(5/4))   O(n^2)
Merge sort       O(n log n)   O(n log n)   O(n log n)
Heapsort         O(n log n)   O(n log n)   O(n log n)
Quicksort        O(n log n)   O(n log n)   O(n^2)

Java Classes Introduced in This Chapter
java.util.Arrays
java.util.Collections

User-Defined Interfaces and Classes in This Chapter
ComparePerson, QuickSort, InsertionSort, SelectionSort, MergeSort, ShellSort, Person, Timsort

Quick-Check Exercises
1. Name two quadratic sorts.
2. Name two sorts with n log n worst-case behavior.
3. Which algorithm is particularly good for an array that is already sorted? Which is particularly bad? Explain your answers.
4. What determines whether you should use a quadratic sort or a logarithmic sort?
5. Which quadratic sort's performance is least affected by the ordering of the array elements? Which is most affected?
6. What is a good all-purpose sorting algorithm for medium-size arrays?

Review Questions
1. When does quicksort work best, and when does it work worst?
2.
Write a recursive procedure to implement the insertion sort algorithm. 3. What is the purpose of the pivot value in quicksort? How did we first select it in the text, and what is wrong with that approach for choosing a pivot value? 4. For the following array 30 40 20 15 60 80 75 4 20 show the new array after each pass of insertion sort and selection sort. How many comparisons and exchanges are performed by each? 5. For the array in Question 4, trace the execution of Shell sort. 6. For the array in Question 4, trace the execution of merge sort. 7. For the array in Question 4, trace the execution of quicksort. 8. For the array in Question 4, trace the execution of heapsort. Programming Projects Use the random number function to store a list of 1000 pseudorandom integer values in an array. Apply each of the sort classes described in this chapter to the array and determine the number of comparisons and exchanges. Make sure the same array is passed to each sort method. Investigate the effect of array size and initial element order on the number of comparisons and exchanges required by each of the sorting algorithms described in this chapter. Use arrays with 100 and 10,000 integers. Use three initial orderings of each array (randomly ordered, inversely ordered, and ordered). Be certain to sort the same six arrays with each sort method. A variation of the merge sort algorithm can be used to sort large sequential data files. The basic strategy is to take the initial data file, read in several (say, 10) data records, sort these records using an efficient array-sorting algorithm, and then write these sorted groups of records (runs) alternately to one of two output files. After all records from the initial data file have been distributed to the two output files, the runs on these output files are merged one pair of runs at a time and written to the original data file. 
After all runs from the output files have been merged, the records on the original data file are redistributed to the output files, and the merging process is repeated. Runs no longer need to be sorted after the first distribution to the temporary output files. Each time runs are distributed to the output files, they contain twice as many records as the time before. The process stops when the length of a run exceeds the number of records in the data file. Write a program that implements merge sort for sequential data files. Test your program on a file with several thousand data values.

Write a method that sorts a linked list.

Write an industrial-strength quicksort method with the following enhancements:
- If an array segment contains 20 elements or fewer, sort it using insertion sort.
- After sorting the first, middle, and last elements, use the median as the pivot instead of swapping the median with the first element.
- Because the first and last elements are already in the correct partitions, it is not necessary to test them before advancing up and down. The same is true after each exchange, so increment up and decrement down at the beginning of the do-while loop. Also, it is not necessary to test whether up is less than last before incrementing up, because the condition pivot.compareTo(last) > 0 is false when up equals last (the median must be less than or equal to the last element in the array).

In the early days of data processing (before computers), data was stored on punched cards. A machine for sorting these cards contained 12 bins (one for each digit value, plus + and -). A stack of cards was fed into the machine, and each card was placed into the appropriate bin depending on the value of the selected column. By restacking the cards so that all 0s came first, followed by the 1s, then the 2s, and so forth, and then sorting on the next column, the whole deck of cards could be sorted.
This process, known as radix sort, requires c passes through the deck, one per column, with each pass handling all n cards, where c is the number of columns and n is the number of cards.

We can simulate the action of this machine using an array of queues. During the first pass, the least significant digit (the ones digit) of each number is examined and the number is added to the queue whose subscript matches that digit. After all numbers have been processed, the elements of each queue are added to an 11th queue, starting with queue[0], followed by queue[1], and so forth. The process is then repeated for the next significant digit, taking the numbers out of the 11th queue. After all the digits have been processed, the 11th queue will contain the numbers in sorted order.

Write a program that implements radix sort on an array of int values. You will need to make 10 passes because an int can store numbers with up to ten decimal digits (the largest int value is 2,147,483,647).

Complete the Dutch National Flag case study. You will need to develop the following classes:
- A main class that extends JFrame to contain the flag and a control button (Sort).
- A class to represent the flag; an extension of JPanel is suggested. This class will contain the array of threads and the sort method.
- A class to represent a thread. Each thread should have a color and a method to draw the thread.

Answers to Quick-Check Exercises
1. Selection sort, insertion sort
2. Merge sort, heapsort
3. Insertion sort: for an already sorted array it requires only n - 1 comparisons and no exchanges. Quicksort can be bad if the first element is picked as the pivot, because the partitioning process then always creates one subarray with a single element.
4. Array size
5. Selection sort
6. Shell sort or any O(n log n) sort
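As a starting point for the radix sort project above, here is a minimal Java sketch. The class and method names are my own; for simplicity it handles non-negative int values only and collects each pass back into the array directly, rather than through the explicit 11th queue the project describes.

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Sketch of the radix sort simulation using an array of digit queues. */
public class RadixSortSketch {

    /** Sorts non-negative ints: one pass per decimal digit, distributing
        each number into the queue matching its current digit. Ten passes
        cover the full non-negative int range. */
    public static void radixSort(int[] data) {
        @SuppressWarnings("unchecked")
        Queue<Integer>[] digitQueues = new Queue[10];
        for (int i = 0; i < 10; i++) {
            digitQueues[i] = new ArrayDeque<>();
        }
        long divisor = 1;                      // selects the current digit
        for (int pass = 0; pass < 10; pass++) {
            // Distribute: add each number to the queue for its digit.
            for (int value : data) {
                int digit = (int) ((value / divisor) % 10);
                digitQueues[digit].add(value);
            }
            // Collect: empty queue[0] first, then queue[1], and so on.
            int k = 0;
            for (Queue<Integer> q : digitQueues) {
                while (!q.isEmpty()) {
                    data[k++] = q.remove();
                }
            }
            divisor *= 10;
        }
    }
}
```

Because queues preserve arrival order, each pass is stable, which is what makes sorting digit by digit from least to most significant correct.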

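The median-of-three pivot selection called for in the industrial-strength quicksort project might begin like the sketch below. It works on an int array for brevity (the chapter's sort classes use Comparable elements), and the names are my own.

```java
/** Sketch of median-of-three pivot selection for quicksort. */
public class PivotSketch {

    /** Orders table[first], table[middle], and table[last] in place,
        then returns the middle index, where the median now sits.
        Quicksort can use table[middle] as the pivot without swapping
        it to the first position. */
    public static int medianOfThree(int[] table, int first, int last) {
        int middle = (first + last) / 2;
        // Three comparisons are enough to order the three samples.
        if (table[middle] < table[first]) swap(table, first, middle);
        if (table[last] < table[middle]) swap(table, middle, last);
        if (table[middle] < table[first]) swap(table, first, middle);
        return middle;   // median is now at the middle position
    }

    private static void swap(int[] table, int i, int j) {
        int temp = table[i];
        table[i] = table[j];
        table[j] = temp;
    }
}
```

After this step the first and last elements are already on the correct sides of the pivot, which is what justifies the project's advice to advance up and down before testing.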

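Review Question 2 asks for a recursive insertion sort; one possible shape, sketched here on an int array with names of my own choosing, mirrors the iterative version by sorting the first n - 1 elements recursively and then inserting the last one.

```java
/** Sketch of a recursive formulation of insertion sort. */
public class RecursiveInsertionSort {

    /** Sorts table[0..n-1]: recursively sort the first n - 1 elements,
        then insert table[n-1] into its place among them. */
    public static void sort(int[] table, int n) {
        if (n <= 1) {
            return;                  // zero or one element is already sorted
        }
        sort(table, n - 1);          // sort the prefix
        int nextVal = table[n - 1];  // element to insert
        int j = n - 2;
        while (j >= 0 && table[j] > nextVal) {
            table[j + 1] = table[j]; // shift larger elements right
            j--;
        }
        table[j + 1] = nextVal;
    }
}
```

On the array from Review Question 4 (30 40 20 15 60 80 75 4 20) this performs the same comparisons and shifts as the iterative algorithm, one recursion level per element.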

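The library sorts summarized in the chapter can be exercised directly as below. Note that recent JDKs refine the scheme the text describes: Arrays.sort uses a dual-pivot quicksort for primitives and a Timsort variant for objects.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

/** Demonstrates the sorting utilities in java.util. */
public class LibrarySortDemo {
    public static void main(String[] args) {
        // Primitive array: sorted in place by Arrays.sort.
        int[] numbers = {30, 40, 20, 15, 60};
        Arrays.sort(numbers);
        System.out.println(Arrays.toString(numbers)); // [15, 20, 30, 40, 60]

        // List of objects: Collections.sort copies the list to an array,
        // sorts it, and copies the result back.
        List<String> words = new ArrayList<>(Arrays.asList("pear", "apple", "plum"));
        Collections.sort(words);
        System.out.println(words); // [apple, pear, plum]
    }
}
```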
