Heap Sort vs Merge Sort in Speed [duplicate] - java

This question already has answers here:
Quicksort superiority over Heap Sort
(6 answers)
Closed 4 years ago.
Which algorithm is faster when iterating through a large array: heap sort or merge sort? Why is one of these algorithms faster than the other?

Although the time complexity is the same, the constant factors are not. Generally merge sort will be significantly faster on a typical system with a 4-way or greater associative cache, since merge sort performs sequential reads from two runs and sequential writes to a single merged run. I recall a merge sort written in C being faster than an optimized heap sort written in assembly.
One issue is that heap sort swaps data, which is two reads and two writes per swap, while merge sort moves data, which is one read and one write per move.
The main drawback of merge sort is that it needs a second array (or vector) the same size as the original (or optionally half the size) for working storage; on a PC with 4 GB or more of RAM, this usually isn't an issue.
On my system (Intel 3770K at 3.5 GHz, Windows 7 Pro 64-bit, Visual Studio 2015), sorting 2^24 = 16,777,216 64-bit unsigned integers takes 7.98 seconds with heap sort, 1.59 seconds with bottom-up merge sort, and 1.65 seconds with top-down merge sort.
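To make the swap-versus-move point concrete, here is a minimal Java sketch of just the two inner loops (not complete sorts; the method names and the choice of long[] are illustrative):

// Heap sort's sift-down: each step swaps parent and child,
// i.e. two reads and two writes per element moved.
static void siftDown(long[] a, int start, int end) {
    int root = start;
    while (2 * root + 1 <= end) {
        int child = 2 * root + 1;
        if (child + 1 <= end && a[child] < a[child + 1]) child++;
        if (a[root] >= a[child]) return;
        long tmp = a[root];      // read
        a[root] = a[child];      // read + write
        a[child] = tmp;          // write
        root = child;
    }
}

// Merge sort's merge: each step moves one element,
// i.e. one read and one write, and both runs are scanned sequentially.
static void merge(long[] src, long[] dst, int lo, int mid, int hi) {
    int i = lo, j = mid;
    for (int k = lo; k < hi; k++) {
        if (i < mid && (j >= hi || src[i] <= src[j])) dst[k] = src[i++];
        else                                          dst[k] = src[j++];
    }
}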

Both sort methods have the same time complexity, and both are optimal. The time required to merge in a merge sort is counterbalanced by the time required to build the heap in heapsort. Merge sort requires additional space; heapsort may be implemented with additional space, but does not require it. Heapsort, however, is unstable, in that it does not guarantee to preserve the relative order of equal elements. If you test both methods fairly and under the same conditions, the differences will be minimal.

Related

QuickSort and MergeSort performance on Sequential data fit in memory vs Slow to Access Sequential data on disk

The following quote is from the "Comparison with other sort algorithms" section of the Wikipedia Merge Sort page:
On typical modern architectures, efficient quicksort implementations generally outperform mergesort for sorting RAM-based arrays.[citation needed] On the other hand, merge sort is a stable sort and is more efficient at handling slow-to-access sequential media.
My questions:
Why does Quicksort outperform Mergesort when the data to be sorted can all fit into memory? If all the data needed is cached or in memory, wouldn't access be fast for both Quicksort and Mergesort?
Why is Mergesort more efficient at handling slow-to-access sequential data (such as data on disk, in the case where everything to be sorted can't fit into memory)?
(Moved from my comments below.) Consider an array arr of primitives (sequential data) with n elements. The pair of elements that has to be read and compared in MergeSort is arr[0] and arr[n/2] (this happens in the final merge). Now consider the pair of elements that has to be read and compared in QuickSort: arr[1] and arr[n] (this happens in the first partition, assuming we swap the randomly chosen pivot with the first element). We know data is read in blocks and loaded into cache, or from disk into memory (correct me if I am wrong), so isn't there a better chance that the needed data gets loaded together in one block when using MergeSort? It just seems to me that MergeSort would always have the upper hand because it is likely comparing elements that are closer together. I know this is false (see graph below) because QuickSort is obviously faster. I know MergeSort is not in-place and requires extra memory, which is likely to slow things down. Other than that, what pieces am I missing in my analysis?
Images are from the Princeton CS MergeSort and QuickSort slides.
My Motive:
I want to understand these concepts because they are among the main reasons why mergeSort is preferred when sorting a LinkedList or other non-sequential data, and quickSort is preferred when sorting an array or other sequential data, and why mergeSort is used to sort Objects in Java while quickSort is used to sort primitive types in Java.
Update: Java 7's API actually uses TimSort to sort Objects, which is a hybrid of MergeSort and InsertionSort, and Dual-Pivot QuickSort for primitives. These changes were implemented starting in Java SE 7. This has to do with the stability of the sorting algorithm. See: Why does Java's Arrays.sort method use two different sorting algorithms for different types?
Edit:
I will appreciate an answer that addresses the following aspects:
I know the two sorting algorithms differ in the number of moves, reads, and comparisons. If those are the reasons that contribute to the behaviors listed in my questions (as I suspect), then a thorough explanation of how the steps and process of each sorting algorithm give it advantages or disadvantages when seeking data from disk or memory will be much appreciated.
Examples are welcome. I learn better with examples.
Note: if you are reading @rcgldr's answer, check out our conversation in the chat room; it has lots of good explanations and details. https://chat.stackoverflow.com/rooms/161554/discussion-between-rcgldr-and-oliver-koo
The main difference is that merge sort does more moves but fewer compares than quick sort. Even in the case of sorting an array of native types, quick sort is only around 15% faster, at least when I've tested it on large arrays of pseudo-random 64-bit unsigned integers, which should be quick sort's best case. On my system (Intel 3770K at 3.5 GHz, Windows 7 Pro 64-bit, Visual Studio 2015), sorting 16 million pseudo-random 64-bit unsigned integers takes 1.32 seconds for quick sort and 1.55 seconds for merge sort; 1.32/1.55 ~= 0.85, so quick sort was about 15% faster than merge sort. My test was with a quick sort that had no checks to avoid worst-case O(n^2) time or O(n) stack space. As checks are added to quick sort to reduce or prevent worst-case behavior (like falling back to heap sort if recursion becomes too deep), the speed advantage decreases to less than 10% (which is the difference I get between VS2015's implementation of std::sort (a modified quick sort) and std::stable_sort (a modified merge sort)).
If sorting "strings", it's more likely that what is being sorted is an array of pointers (or references) to those strings. This is where merge sort is faster, because the moves involve pointers, while the compares involve a level of indirection and comparison of strings.
The main reason for choosing quick sort over merge sort is not speed but space. Merge sort normally uses a second array the same size as the original. Quick sort and top-down merge sort also need log(n) stack frames for recursion; for quick sort, limiting stack space to log(n) frames is done by recursing only on the smaller partition and looping back to handle the larger partition.
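A minimal sketch of that stack-limiting technique, assuming a simple Lomuto partition (not a tuned production quick sort): recurse only on the smaller partition and loop on the larger one, so the recursion depth stays O(log n) even for unbalanced splits.

static void quickSort(long[] a, int lo, int hi) {
    while (lo < hi) {
        int p = partition(a, lo, hi);   // pivot ends up at index p
        if (p - lo < hi - p) {          // left side is smaller
            quickSort(a, lo, p - 1);    // recurse on the smaller side
            lo = p + 1;                 // loop back on the larger side
        } else {
            quickSort(a, p + 1, hi);
            hi = p - 1;
        }
    }
}

static int partition(long[] a, int lo, int hi) {
    long pivot = a[hi];
    int i = lo;
    for (int j = lo; j < hi; j++) {
        if (a[j] < pivot) { long t = a[i]; a[i] = a[j]; a[j] = t; i++; }
    }
    long t = a[i]; a[i] = a[hi]; a[hi] = t;
    return i;
}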
In terms of cache behavior, most recent processors have 4- or 8-way associative caches. For merge sort, during a merge, the two input runs will end up in two of the cache lines and the one output run in a third. Quick sort scans the data before doing swaps, so the scanned data will be in cache, although in separate lines if the two elements being compared/swapped are located far enough from each other.
For an external sort, some variation of bottom-up merge sort is used. This is because merge sort's merge operations are sequential (the only random access occurs when starting a new pair of runs), which is fast in the case of hard drives, or in legacy times, tape drives (a minimum of 3 tape drives is needed). Each read or write can be for very large blocks of data, reducing average access time per element in the case of a hard drive, since a large number of elements are read or written at a time with each I/O.
It should also be noted that most merge sorts in libraries are also some variation of bottom up merge sort. Top down merge sort is mostly a teaching environment implementation.
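For reference, here is a hedged sketch of a plain bottom-up merge sort in Java (an illustration only, not any particular library's implementation): runs of width 1, 2, 4, ... are merged iteratively, and the source and work arrays swap roles on each pass.

static void bottomUpMergeSort(long[] a) {
    int n = a.length;
    long[] work = new long[n];
    long[] src = a, dst = work;
    for (int width = 1; width < n; width *= 2) {
        for (int lo = 0; lo < n; lo += 2 * width) {
            int mid = Math.min(lo + width, n);
            int hi  = Math.min(lo + 2 * width, n);
            int i = lo, j = mid;
            for (int k = lo; k < hi; k++) {
                if (i < mid && (j >= hi || src[i] <= src[j])) dst[k] = src[i++];
                else                                          dst[k] = src[j++];
            }
        }
        long[] tmp = src; src = dst; dst = tmp;            // swap roles for the next pass
    }
    if (src != a) System.arraycopy(src, 0, a, 0, n);       // make sure the result ends up in a
}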
If sorting an array of native types on a processor with 16 registers, such as an x86 in 64-bit mode, with 8 of the registers used as start and end pointers (or references) for 4 runs, then a 4-way merge sort is often about the same speed as or a bit faster than quick sort, assuming the compiler optimizes the pointers or references into registers. It's a similar trade-off: like quick sort, 4-way merge sort does more compares (about 1.5x) but fewer moves (about 0.5x) than traditional 2-way merge sort.
It should be noted that these sorts are CPU bound, not memory bound. I made a multi-threaded version of a bottom-up merge sort, and using 4 threads the sort was 3 times as fast. Link to Windows example code using 4 threads:
https://codereview.stackexchange.com/questions/148025/multithreaded-bottom-up-merge-sort

Java - Large array advice on how to break it down [duplicate]

I'm trying to find a counterexample to the Pólya Conjecture, which will be somewhere in the 900 millions. I'm using a very efficient algorithm that doesn't even require any factorization (similar to a Sieve of Eratosthenes, but with even more information), so a large array of ints is required.
The program is efficient and correct, but requires an array up to the x I want to check (it checks all numbers from 2 to x). So, if the counterexample is in the 900 millions, I need an array that is just as large. Java won't allow me anything over about 20 million. Is there anything I can possibly do to get an array that large?
You may want to extend the max size of the JVM Heap. You can do that with a command line option.
I believe it is -Xmx3600m (3600 megabytes)
Java arrays are indexed by int, so an array can't hold more than 2^31 - 1 elements (there are no unsigned ints). So the maximum length of an array is 2147483647, and a plain int[] of that length consumes about 8 GB.
Thus, the int index is usually not the limitation, since you would run out of memory anyway.
In your algorithm, you should use a List (or a Map) as your data structure instead, and choose an implementation of List (or Map) that can grow beyond 2^31 entries. This can get tricky, since the "usual" implementations ArrayList (and HashMap) use arrays internally. You will have to implement a custom data structure, e.g. a 2-level array (a list of arrays). While you are at it, you can also try to pack the bits more tightly.
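A minimal sketch of that 2-level idea, using a hypothetical BigIntArray class that spreads a long-indexed array across several int[] segments (the segment size is an arbitrary choice; memory remains the real constraint, as noted above):

class BigIntArray {
    private static final int SEGMENT_BITS = 24;                  // 2^24 ints (64 MB) per segment
    private static final int SEGMENT_SIZE = 1 << SEGMENT_BITS;
    private static final int SEGMENT_MASK = SEGMENT_SIZE - 1;
    private final int[][] segments;

    BigIntArray(long length) {
        int count = (int) ((length + SEGMENT_SIZE - 1) >>> SEGMENT_BITS);
        segments = new int[count][SEGMENT_SIZE];
    }

    int get(long index) {
        return segments[(int) (index >>> SEGMENT_BITS)][(int) (index & SEGMENT_MASK)];
    }

    void set(long index, int value) {
        segments[(int) (index >>> SEGMENT_BITS)][(int) (index & SEGMENT_MASK)] = value;
    }
}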
Java will allow up to about 2 billion array entries. It's your machine (and your limited memory) that cannot handle such a large amount.
900 million 32 bit ints with no further overhead - and there will always be more overhead - would require a little over 3.35 GiB. The only way to get that much memory is with a 64 bit JVM (on a machine with at least 8 GB of RAM) or use some disk backed cache.
If you don't need it all loaded in memory at once, you could segment it into files and store on disk.
What do you mean by "won't allow"? You are probably getting an OutOfMemoryError, so add more memory with the -Xmx command line option.
You could define your own class which stores the data in a 2D array of roughly sqrt(n) by sqrt(n). Then use an index function to determine the two indices into the array. This can be extended to more dimensions as needed.
The main problem you will run into is running out of RAM. If you approach this limit, you'll need to rethink your algorithm or consider external storage (ie a file or database).
If your algorithm allows it:
Compute it in slices which fit into memory.
You will have to redo the computation for each slice, but it will often be fast enough.
Use an array of a smaller numeric type such as byte.
Depending on how you need to access the array, you might find a RandomAccessFile will allow you to use a file which is larger than will fit in memory. However, the performance you get is very dependent on your access pattern.
I wrote a version of the Sieve of Eratosthenes for Project Euler which worked on chunks of the search space at a time. It processes the first 1M integers (for example), but keeps each prime number it finds in a table. After you've iterated over all the primes found so far, the array is re-initialised and the primes already found are used to mark the array before looking for new primes.
The table maps a prime to its 'offset' from the start of the array for the next processing iteration.
This is similar in concept (if not in implementation) to the way functional programming languages perform lazy evaluation of lists (although in larger steps). Allocating all the memory up front isn't necessary, since you're only interested in the parts of the array that pass your test for primality. Keeping the non-primes around isn't useful to you.
This method also provides memoisation for later iterations over prime numbers: it's faster than scanning a sparse sieve data structure looking for the ones every time.
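A hedged sketch of this chunked approach (not the original code; it recomputes each prime's first multiple per window instead of keeping the offset table described above, and it still stores every prime found, so it only suits moderate limits):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SegmentedSieve {
    // Returns all primes below 'limit', sieving one window of 'windowSize' numbers at a time.
    public static List<Long> primesBelow(long limit, int windowSize) {
        List<Long> primes = new ArrayList<>();
        boolean[] composite = new boolean[windowSize];
        for (long base = 2; base < limit; base += windowSize) {
            Arrays.fill(composite, false);
            long end = Math.min(base + windowSize, limit);
            // Mark multiples of the primes found in earlier windows.
            for (long p : primes) {
                if (p > (end - 1) / p) break;                        // p * p >= end: nothing left to mark
                long first = Math.max(p * p, ((base + p - 1) / p) * p);
                for (long m = first; m < end; m += p) composite[(int) (m - base)] = true;
            }
            // Anything still unmarked in this window is a new prime.
            for (long n = base; n < end; n++) {
                if (!composite[(int) (n - base)]) {
                    primes.add(n);
                    if (n <= (end - 1) / n) {                        // n * n still lies inside this window
                        for (long m = n * n; m < end; m += n) composite[(int) (m - base)] = true;
                    }
                }
            }
        }
        return primes;
    }
}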
I second @sfossen's idea and @Aaron Digulla's. I'd go for disk access. If your algorithm can work with a List interface rather than a plain array, you could write an adapter from the List to a memory mapped file.
Use Tokyo Cabinet, Berkeley DB, or any other disk-based key-value store. They're faster than any conventional database but allow you to use the disk instead of memory.
Could you get by with 900 million bits? (Maybe stored as a byte array.)
You can try splitting it up into multiple arrays.
List<Integer> myFirstList = new ArrayList<>();
List<Integer> mySecondList = new ArrayList<>();
for (int x = 0; x <= 1000000; x++) {
    myFirstList.add(x);
}
for (int x = 1000001; x <= 2000000; x++) {
    mySecondList.add(x);
}
then iterate over them.
for (int x : myFirstList) {
    for (int y : myFirstList) {
        // Remove multiples
    }
}
// repeat for the second list
Use a memory mapped file (Java 5 NIO package) instead. Or move the sieve into a small C library and use Java JNI.
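A minimal sketch of the memory-mapped-file approach (assuming Java 7+ for try-with-resources; the file name and the one-byte-per-flag layout are arbitrary choices, and a single mapping is limited to Integer.MAX_VALUE bytes, so larger sieves would need several mappings):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedFlags {
    public static void main(String[] args) throws Exception {
        long count = 900_000_000L;                        // 900 million one-byte flags, ~900 MB on disk
        try (RandomAccessFile raf = new RandomAccessFile("flags.bin", "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, count);
            buf.put((int) 123_456_789L, (byte) 1);        // set a flag
            byte flag = buf.get((int) 123_456_789L);      // read it back
            System.out.println(flag);
        }
    }
}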

Difference between Arrays.sort() and Arrays.parallelSort()

Was going through Java 8 features, mentioned here. Couldn't understand what parallelSort() does exactly. Can someone explain what is the actual difference between sort() and parallelSort()?
Parallel sort uses threading: each thread gets a chunk of the list and all the chunks are sorted in parallel. These sorted chunks are then merged into a result.
It's faster when there are a lot of elements in the collection. The overhead of parallelization (splitting into chunks and merging) becomes tolerably small on larger collections, but it is large relative to the work for smaller ones.
Take a look at this table (of course, the results depend on the CPU, number of cores, background processes, etc):
Taken from this link: http://www.javacodegeeks.com/2013/04/arrays-sort-versus-arrays-parallelsort.html
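If you want to produce numbers like those in the table yourself, a rough micro-benchmark sketch looks like this (no JMH or warm-up discipline, so treat the results as ballpark only; the array size is arbitrary):

import java.util.Arrays;
import java.util.Random;

public class SortVsParallelSort {
    public static void main(String[] args) {
        int n = 10_000_000;
        long[] original = new Random(42).longs(n).toArray();

        long[] a = original.clone();
        long t0 = System.nanoTime();
        Arrays.sort(a);
        System.out.printf("Arrays.sort:         %d ms%n", (System.nanoTime() - t0) / 1_000_000);

        long[] b = original.clone();
        long t1 = System.nanoTime();
        Arrays.parallelSort(b);
        System.out.printf("Arrays.parallelSort: %d ms%n", (System.nanoTime() - t1) / 1_000_000);
    }
}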
Arrays.parallelSort():
The method uses a threshold value: any array smaller than the threshold is sorted using the Arrays#sort() API (i.e. sequential sorting). The threshold is computed from the parallelism of the machine and the size of the array as:
private static final int getSplitThreshold(int n) {
    int p = ForkJoinPool.getCommonPoolParallelism();
    int t = (p > 1) ? (1 + n / (p << 3)) : n;
    return t < MIN_ARRAY_SORT_GRAN ? MIN_ARRAY_SORT_GRAN : t;
}
Once it's decided whether to sort the array in parallel or serially, the next step is to decide how to divide the array into multiple parts, assign each part to a Fork/Join task which takes care of sorting it, and use another Fork/Join task to merge the sorted parts. The implementation in JDK 8 uses this approach:
Divide the array into 4 parts.
Sort the first two parts and then merge them.
Sort the next two parts and then merge them.
The above steps are repeated recursively for each part until the size of the part to sort falls below the threshold value calculated above. A simplified sketch of this fork/join structure is shown below.
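This is a simplified, hedged sketch of the fork/join sort-merge shape (a two-way split rather than the JDK's four-way scheme, and not the actual JDK code; the class name and threshold are illustrative):

import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

class ParallelMergeSortTask extends RecursiveAction {
    private static final int THRESHOLD = 1 << 13;    // below this, fall back to sequential Arrays.sort
    private final long[] a, work;
    private final int lo, hi;                        // sorts a[lo, hi)

    ParallelMergeSortTask(long[] a, long[] work, int lo, int hi) {
        this.a = a; this.work = work; this.lo = lo; this.hi = hi;
    }

    @Override
    protected void compute() {
        if (hi - lo <= THRESHOLD) {
            Arrays.sort(a, lo, hi);
            return;
        }
        int mid = (lo + hi) >>> 1;
        invokeAll(new ParallelMergeSortTask(a, work, lo, mid),    // sort both halves as parallel subtasks
                  new ParallelMergeSortTask(a, work, mid, hi));
        merge(mid);                                               // then merge them in this task
    }

    private void merge(int mid) {
        System.arraycopy(a, lo, work, lo, hi - lo);
        int i = lo, j = mid;
        for (int k = lo; k < hi; k++) {
            if (i < mid && (j >= hi || work[i] <= work[j])) a[k] = work[i++];
            else                                            a[k] = work[j++];
        }
    }

    static void sort(long[] a) {
        ForkJoinPool.commonPool().invoke(
                new ParallelMergeSortTask(a, new long[a.length], 0, a.length));
    }
}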
You can also read the implementation details in the Javadoc
The sorting algorithm is a parallel sort-merge that breaks the array into sub-arrays that are themselves sorted and then merged. When the sub-array length reaches a minimum granularity, the sub-array is sorted using the appropriate Arrays.sort method. If the length of the specified array is less than the minimum granularity, then it is sorted using the appropriate Arrays.sort method. The algorithm requires a working space no greater than the size of the specified range of the original array. The ForkJoin common pool is used to execute any parallel tasks.
Arrays.sort():
This uses merge sort or TimSort underneath to sort the contents (and dual-pivot quicksort for primitives). It is all done sequentially: even though merge sort uses a divide-and-conquer technique, the work runs in a single thread.
Source
The key differences between the two are as follows:
1. Arrays.sort(): sequential sorting. The API uses a single thread for the operation, so it takes a bit longer on large arrays.
2. Arrays.parallelSort(): parallel sorting. The API uses multiple threads, so it takes less time compared to sort() on large arrays.
For more results, we all have to wait for Java 8, I guess! Cheers!
You can refer to the javadoc, which explains that the algorithm uses several threads if the array is large enough:
The sorting algorithm is a parallel sort-merge that breaks the array into sub-arrays that are themselves sorted and then merged. When the sub-array length reaches a minimum granularity, the sub-array is sorted using the appropriate Arrays.sort method. [...] The ForkJoin common pool is used to execute any parallel tasks.
In a nutshell, parallelSort uses multiple threads. This article has way more detail if you really want to know.
From this link
Current sorting implementations provided by the Java Collections Framework (Collections.sort and Arrays.sort) all perform the sorting operation sequentially in the calling thread. This enhancement will offer the same set of sorting operations currently provided by the Arrays class, but with a parallel implementation that utilizes the Fork/Join framework. These new APIs are still synchronous with regard to the calling thread, as it will not proceed past the sorting operation until the parallel sort is complete.
Previously you would use:
Arrays.sort(myArray);
You can now use:
Arrays.parallelSort(myArray);
This will automatically break up the target collection into several parts, which will be sorted independently across a number of cores and then grouped back together. The only caveat here is that when called in highly multi-threaded environments, such as a busy web container, the benefits of this approach will begin to diminish (by more than 90%) due to the cost of increased CPU context switches.
Source- link

How to create array of size greater than integer max [duplicate]

This question already has answers here:
Java array with more than 4gb elements
(11 answers)
Closed 8 years ago.
I was trying to get all primes before 600851475143.
I was using Sieve of Eratosthenes for this.
This requires me to create a boolean array of that huge size.
Bad idea; you can run out of memory.
Is there any other way? I tried using a String, with each index holding 0 or 1 to represent true or false, but the indexOf method also returns an int. I am now using a 2D array for my problem.
Is there any other, better way to store such a huge array?
The memory requirement for 600851475143 booleans is, at best (one bit per flag), about 70 GB. This isn't feasible. You need to either use compression as suggested by Stephan, or find a different algorithm for calculating the primes.
I had a similar problem and I used a bit set (basically setting 1 or 0 for the desired offset, in order). I recommend using EWAHCompressedBitmap, which will also compress your bit set.
EDIT
As Alan said, the BitSet would occupy about 70 GB of memory, but you can do another thing: have multiple BitSets (consecutive ones, so that you can calculate the absolute position) and load into memory just the BitSet that you need at that moment, something like a lazy load. That way you keep control of the memory used. A sketch of the idea is shown below.
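A minimal sketch of that multiple-BitSet idea, using a hypothetical LongBitSet that addresses bits by a long index and allocates each segment on first use (spilling cold segments to disk, as suggested above, is left out):

import java.util.BitSet;

class LongBitSet {
    private static final int SEGMENT_BITS = 30;                      // 2^30 bits (128 MB) per segment
    private static final long SEGMENT_MASK = (1L << SEGMENT_BITS) - 1;
    private final BitSet[] segments;

    LongBitSet(long nbits) {
        segments = new BitSet[(int) ((nbits + SEGMENT_MASK) >>> SEGMENT_BITS)];
    }

    private BitSet segment(long index) {
        int s = (int) (index >>> SEGMENT_BITS);
        if (segments[s] == null) segments[s] = new BitSet(1 << SEGMENT_BITS);   // allocated on first touch
        return segments[s];
    }

    void set(long index)    { segment(index).set((int) (index & SEGMENT_MASK)); }

    boolean get(long index) {
        BitSet s = segments[(int) (index >>> SEGMENT_BITS)];
        return s != null && s.get((int) (index & SEGMENT_MASK));
    }
}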
It's not really practical to remember, for each number, whether it was a prime or not, for such a large range (the sieve is a very slow approach for large numbers in general).
From this link you get an idea how many primes to expect below X. For your 600 billion range you can expect roughly 20 billion primes to exist within that range. Storing them as a long[] would require about 160 GB of memory; that's notably more than the suggested 70 GB for storing a single bit per number, or half that if you exclude even numbers (2 is the only even prime).
For a desktop computer, 35 GB in memory may be a bit much, but a good workstation can have that much RAM. I would try a two-dimensional array with bit shifting/masking.
I would still expect your sieve code to run for a considerable amount of time (somewhere from days to years). I suggest you investigate more advanced prime detection methods than the sieve.
You could use HotSpot's internal sun.misc.Unsafe API to allocate a bigger array. I wrote a blog post on how to simulate an array with it. However, it's not an official Java API, so it qualifies as a hack.
Use BitSet. You can then set a bit at any index. 600851475143 is less than 2^40, so the value itself fits in 40 bits (in practice it will occupy a 64-bit long).
You can in fact go up to 2^63, which is massive for most purposes.

java - Fast parallel sort to put unsigned integers into order?

I am on a mission to sort a somewhat large array of unsigned, 64-bit, randomly generated integers (over 5E7 elements). Can you direct me to a parallel sorting algorithm that might exhibit almost linear speedup, at least in the case of random data?
I am working with Java, in case it makes any difference with regard to fast sorting.
Edit: Note that this question is primarily concerned with parallel sorts capable of achieving near-linear speedup. (Meaning: when the number of executing cores grows from P to 2P, the time spent by the parallel sort drops to 50-55 percent of the time taken on P cores.)
Well, if you have a lot of memory you can use bucket sort. Another algorithm that goes well with parallelism is quicksort.
From the Wikipedia article on Quicksort:
Like merge sort, quicksort can also be parallelized due to its divide-and-conquer nature. Individual in-place partition operations are difficult to parallelize, but once divided, different sections of the list can be sorted in parallel. The following is a straightforward approach: If we have p processors, we can divide a list of n elements into p sublists in O(n) average time, then sort each of these in O((n/p) log(n/p)) average time. Ignoring the O(n) preprocessing and merge times, this is linear speedup. If the split is blind, ignoring the values, the merge naïvely costs O(n). If the split partitions based on a succession of pivots, it is tricky to parallelize and naïvely costs O(n). Given O(log n) or more processors, only O(n) time is required overall, whereas an approach with linear speedup would achieve O(log n) time overall.
Obviously mergesort is another alternative. I think quicksort gives better average-case performance.
Quicksort and merge sort are both fairly easy to parallelize. Oracle has a fork/join-based integer merge sort here, which you could probably use (if not as-is, then at least as inspiration).
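One practical detail for the "unsigned 64-bit" part of the question: Java has no unsigned long, but flipping the sign bit maps unsigned order onto signed order, so (assuming Java 8+ is available) Arrays.parallelSort can be used directly. A hedged sketch:

import java.util.Arrays;

public class UnsignedParallelSort {
    static void sortUnsigned(long[] a) {
        for (int i = 0; i < a.length; i++) a[i] ^= Long.MIN_VALUE;   // map unsigned order to signed order
        Arrays.parallelSort(a);                                      // multi-threaded sort-merge
        for (int i = 0; i < a.length; i++) a[i] ^= Long.MIN_VALUE;   // map back
    }

    public static void main(String[] args) {
        long[] a = { -1L /* 2^64 - 1 */, 1L, 0L, Long.MIN_VALUE /* 2^63 */ };
        sortUnsigned(a);
        for (long v : a) System.out.println(Long.toUnsignedString(v));
        // prints 0, 1, 9223372036854775808, 18446744073709551615
    }
}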
Say you have a few computers (e.g. 5 on an Amazon cluster) and you want ascending order. Split your array into smaller chunks so each fits on one machine.
Assuming you have n chunks/arrays, have each machine quicksort its chunk. This sorting happens in parallel (more or less, depending on chunk size, machine speed, etc.).
When done sorting, have the machines merge the chunks.
You can do this in 2 ways:
2 machines at a time (you're building a merge tree). The merging will happen, again, in parallel. The problem is that the array will grow big due to merging and you have to cache to disk, so when you merge again the machine reads from disk. So some penalty here.
You can do n machines at a time: have one coordinator machine which takes the minimum from the heads of all the other machines' sorted arrays. This way the coordinator machine builds the entire sorted array by repeatedly taking the smallest remaining number from each of the other sorted arrays. A sketch of this k-way merge is shown below.
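A minimal single-machine sketch of the coordinator idea in option 2: repeatedly take the smallest head element across all the sorted chunks, here with a priority queue of cursors (the class and method names are illustrative):

import java.util.PriorityQueue;

public class KWayMerge {
    public static long[] merge(long[][] sortedChunks) {
        int total = 0;
        for (long[] c : sortedChunks) total += c.length;
        // Each queue entry is {chunkIndex, positionInChunk}, ordered by the value it points at.
        PriorityQueue<int[]> heads = new PriorityQueue<>(
                (x, y) -> Long.compare(sortedChunks[x[0]][x[1]], sortedChunks[y[0]][y[1]]));
        for (int i = 0; i < sortedChunks.length; i++) {
            if (sortedChunks[i].length > 0) heads.add(new int[] { i, 0 });
        }
        long[] result = new long[total];
        int k = 0;
        while (!heads.isEmpty()) {
            int[] head = heads.poll();
            result[k++] = sortedChunks[head[0]][head[1]];
            if (++head[1] < sortedChunks[head[0]].length) heads.add(head);   // advance this chunk's cursor
        }
        return result;
    }
}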
Bitonic sort is an algorithm targeted for parallel machines. Here is a sequential Java version and a parallel C++ version to help you get started.
