Both quicksort and heapsort do in-place sorting. Which is better? What are the applications and cases in which either is preferred?
http://www.cs.auckland.ac.nz/~jmor159/PLDS210/qsort3.html has some analysis.
Also, from Wikipedia:
Heapsort is a safe bet when dealing with very large inputs. Asymptotic analysis reveals that the order of growth of heapsort in the worst case is O(n log n), which is better than quicksort's O(n^2) worst case. However, heapsort is somewhat slower in practice on most machines than a well-implemented quicksort, and heapsort is not a stable sorting algorithm.

The reason heapsort is slower in practice than quicksort is quicksort's better locality of reference (https://en.wikipedia.org/wiki/Locality_of_reference): its data accesses fall within relatively close storage locations, and systems that exhibit strong locality of reference are great candidates for performance optimization. Heapsort, by contrast, makes larger leaps through the array. This also makes quicksort more favorable for smaller inputs.
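To make the "larger leaps" concrete, here is a minimal in-place heapsort sketch (function names are my own, not from any answer above). Notice that `sift_down` jumps between positions `i`, `2*i + 1`, and `2*i + 2`, which are far apart in memory for large heaps; that is the locality cost being described.

```python
def sift_down(a, start, end):
    """Restore the max-heap property for the subtree rooted at `start`."""
    root = start
    while 2 * root + 1 <= end:
        child = 2 * root + 1                  # left child: a large index jump
        if child + 1 <= end and a[child] < a[child + 1]:
            child += 1                        # right child is larger
        if a[root] >= a[child]:
            return
        a[root], a[child] = a[child], a[root]
        root = child

def heapsort(a):
    n = len(a)
    for start in range(n // 2 - 1, -1, -1):   # build the heap bottom-up: O(n)
        sift_down(a, start, n - 1)
    for end in range(n - 1, 0, -1):           # repeatedly extract the max
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end - 1)
    return a
```

Every sift-down costs O(log n), so the whole sort is O(n log n) even in the worst case, which is exactly the guarantee quicksort lacks.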
For most situations, having quick vs. a little quicker is irrelevant... you simply never want it to occasionally get waayyy slow. Although you can tweak QuickSort to avoid the way slow situations, you lose the elegance of the basic QuickSort. So, for most things, I actually prefer HeapSort... you can implement it in its full simple elegance, and never get a slow sort.
For situations where you DO want max speed in most cases, QuickSort may be preferred over HeapSort, but neither may be the right answer. For speed-critical situations, it is worth examining closely the details of the situation. For example, in some of my speed-critical code, it is very common that the data is already sorted or near-sorted (it is indexing multiple related fields that often either move up and down together OR move up and down opposite each other, so once you sort by one, the others are either sorted or reverse-sorted or close... either of which can kill QuickSort). For that case, I implemented neither... instead, I implemented Dijkstra's SmoothSort... a HeapSort variant that is O(N) when already sorted or near-sorted... it is not so elegant, not too easy to understand, but fast... read http://www.cs.utexas.edu/users/EWD/ewd07xx/EWD796a.PDF if you want something a bit more challenging to code.
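The "already sorted input kills quicksort" point is easy to demonstrate. This is my own illustration, not code from the answer: a naive quicksort that always picks the first element as pivot makes maximally unbalanced partitions on sorted input, so its element-vs-pivot comparison count grows quadratically.

```python
def quicksort_comparisons(a):
    """Count element-vs-pivot comparisons for a first-element-pivot quicksort."""
    comparisons = 0
    stack = [a]                              # iterative, to avoid recursion limits
    while stack:
        part = stack.pop()
        if len(part) <= 1:
            continue
        pivot = part[0]
        comparisons += len(part) - 1         # each other element meets the pivot once
        stack.append([x for x in part[1:] if x < pivot])
        stack.append([x for x in part[1:] if x >= pivot])
    return comparisons

n = 1000
print(quicksort_comparisons(list(range(n))))  # 499500, i.e. n*(n-1)/2
```

On sorted input the pivot is always the minimum, so one side of every partition is empty: n(n-1)/2 comparisons, the quadratic worst case, versus roughly n log n for a balanced run.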
Comparison between quicksort and heapsort: both sort in place, but their worst-case running times differ. The worst-case running time for quicksort is O(n^2), while for heapsort it is still O(n log n). For average amounts of data, quicksort will be more useful. Since randomized quicksort is a randomized algorithm, the probability of it finishing quickly depends on the position of the pivot element you choose. So, for a partition of size s into a "less" part L and a "greater" part G:

- Good call: the sizes of L and G are each less than 3s/4
- Bad call: one of L and G has size greater than 3s/4

For small amounts of data we can go with insertion sort, and for very large amounts of data go with heapsort.
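The good-call/bad-call distinction above can be sketched as follows (my own illustration, not code from the answer). With a uniformly random pivot, the pivot lands in the middle half of the values with probability at least 1/2, and in that case both L and G have size less than 3s/4, which keeps the expected recursion depth at O(log n).

```python
import random

def randomized_quicksort(a):
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                  # random pivot: a "good call" w.p. >= 1/2
    L = [x for x in a if x < pivot]           # elements less than the pivot
    E = [x for x in a if x == pivot]          # elements equal to the pivot
    G = [x for x in a if x > pivot]           # elements greater than the pivot
    return randomized_quicksort(L) + E + randomized_quicksort(G)
```

This version trades the in-place property for clarity; the good-call probability argument is the same either way.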
Quicksort-heapsort in-place hybrids are really interesting, too, since most of them need only n log n comparisons in the worst case (they are optimal with respect to the first term of the asymptotics, so they avoid the worst-case scenarios of quicksort), they use O(log n) extra space, and they preserve at least "a half" of the good behaviour of quicksort on already-ordered data. An extremely interesting algorithm is presented by Diekert and Weiß in http://arxiv.org/pdf/1209.4214v1.pdf.
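The simplest version of the hybrid idea is introsort-style: run quicksort, but bail out to heapsort when the recursion gets suspiciously deep. The sketch below is my own and is deliberately simplified; it is not the algorithm from the linked paper, and unlike the hybrids described above it is not in-place (it uses `heapq`, which builds a min-heap, so repeated `heappop` yields ascending order).

```python
import heapq
import math
import random

def introsort(a, depth=None):
    if depth is None:
        depth = 2 * max(1, int(math.log2(len(a) or 1)) + 1)
    if len(a) <= 1:
        return a
    if depth == 0:                            # too many bad pivots: fall back to heapsort
        heap = list(a)
        heapq.heapify(heap)
        return [heapq.heappop(heap) for _ in range(len(heap))]
    pivot = random.choice(a)
    L = [x for x in a if x < pivot]
    E = [x for x in a if x == pivot]
    G = [x for x in a if x > pivot]
    return introsort(L, depth - 1) + E + introsort(G, depth - 1)
```

The depth limit guarantees an O(n log n) worst case while keeping quicksort's typical speed on average inputs, which is the same trade the in-place hybrids make more carefully.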
Heapsort builds a heap and then repeatedly extracts the maximum item. Its worst case is O(n log n).

But if you look at the worst case of quicksort, which is O(n^2), you will realize that quicksort is a not-so-good choice for large data.
So this makes sorting an interesting thing; I believe the reason so many sorting algorithms exist today is that each of them is 'best' in its own place. For instance, bubble sort can outperform quicksort if the data is already sorted. Or, if we know something about the items to be sorted, we can probably do better.
This may not answer your question directly, but I thought I'd add my two cents.