Question:

Counting sort is known to run in linear time if we know that all elements in the array are bounded above by a given number. If we take a general array, can't we just scan the array in linear time to find the maximum value, and then apply counting sort?
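For concreteness, here is a minimal sketch of the approach being asked about, one linear scan for the maximum followed by a plain counting sort (Python and the function name are my own illustration; it assumes non-negative integers):

```python
def counting_sort_with_scan(arr):
    """Sketch of the question's idea: one linear scan to find the
    maximum, then a counting sort up to that maximum.
    Assumes non-negative integers."""
    if not arr:
        return []
    k = max(arr)                       # O(n) scan for the upper bound
    count = [0] * (k + 1)              # k + 1 counters -- this is the catch
    for x in arr:                      # O(n): tally each value
        count[x] += 1
    result = []
    for value, c in enumerate(count):  # O(k): emit each value in order
        result.extend([value] * c)
    return result

print(counting_sort_with_scan([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]
```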
Answer 1:
It is not enough to know the upper bound to run a counting sort: you need to have enough memory to fit all the counters.
Consider a situation where you go through an array of 64-bit integers and find that the largest element is 2^60. This would mean two things:
- You would need O(2^60) memory, and
- it would take O(2^60) time to complete the sort.
The fact that O(2^60) is the same as O(1) is of little help here, because the constant factor is simply too large. This is very often a problem with pseudo-polynomial time algorithms.
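To make that constant concrete, a rough back-of-the-envelope calculation (the 8-byte counter size is an assumption for illustration):

```python
# Illustrative arithmetic, assuming one 8-byte counter per possible key value.
counters = 2 ** 60
bytes_needed = counters * 8           # 2^63 bytes in total
print(bytes_needed / 2 ** 50, "PiB")  # 8192.0 PiB, i.e. 8 exbibytes
```

That is orders of magnitude beyond any real machine, even though the input array itself might contain only a handful of elements.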
Answer 2:
Suppose the largest number is something like 235684121. Then you would need an enormous amount of RAM just to hold your buckets.
Answer 3:
I would like to add something to @dasblinkenlight's and @AlbinSunnanbo's answers: your idea of scanning the array in one O(n) pass to find the maximum value is fine. The relevant passage from Wikipedia:

However, if the value of k is not already known then it may be computed by an additional loop over the data to determine the maximum key value that actually occurs within the data.

Since the time complexity is O(n + k) and k must stay under a certain limit, the k you find has to be small. As @dasblinkenlight mentioned, O(large_value) cannot in practice be treated as O(1).
Though I don't know of any major applications of Counting sort so far other than as a subroutine of Radix Sort, it can be used nicely in problems like sorting the characters of a string (e.g., sorting "android" into "addinor"), since there k is only 255.
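As a rough sketch of that string-sorting use (the helper name and the Python rendering are mine; it assumes 8-bit character codes, hence the 256 buckets):

```python
def counting_sort_chars(s):
    """Counting sort over characters: k is fixed at 256 buckets
    (one per possible 8-bit character code), so the whole sort
    runs in O(n + 256) = O(n) time."""
    count = [0] * 256
    for ch in s:                  # O(n): tally each character code
        count[ord(ch)] += 1
    # O(k): rebuild the string in ascending character order
    return ''.join(chr(v) * c for v, c in enumerate(count) if c)

print(counting_sort_chars("android"))  # addinor
```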