What's the fastest algorithm for sorting a linked list?

Posted 2019-01-03 12:04

I'm curious if O(n log n) is the best a linked list can do.

11 Answers
我只想做你的唯一
#2 · 2019-01-03 12:14

Depending on a number of factors, it may actually be faster to copy the list to an array and then use a Quicksort.

The reason this might be faster is that an array has much better cache performance than a linked list. If the nodes in the list are dispersed in memory, you may be generating cache misses all over the place. Then again, if the array is large you will get cache misses anyway.

Mergesort parallelises better, so it may be a better choice if that is what you want. It is also much faster if you perform it directly on the linked list.

Since both algorithms run in O(n * log n), making an informed decision would involve profiling them both on the machine you would like to run them on.
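For concreteness, here is a rough C sketch of the copy-to-array approach; the node struct and the cmp_int / sort_list_via_array names are illustrative, not code from the benchmark below.

    #include <stdlib.h>

    struct node { int value; struct node *next; };

    /* qsort comparator for ints, ascending. */
    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    /* Copy the list's values into a temporary array, qsort it,
       then write the sorted values back into the existing nodes. */
    static void sort_list_via_array(struct node *head, size_t n)
    {
        int *buf = malloc(n * sizeof *buf);
        if (!buf)
            return;                 /* out of memory: leave the list unsorted */

        size_t i = 0;
        for (struct node *p = head; p; p = p->next)
            buf[i++] = p->value;

        qsort(buf, n, sizeof *buf, cmp_int);

        i = 0;
        for (struct node *p = head; p; p = p->next)
            p->value = buf[i++];

        free(buf);
    }

Note that this sorts the payloads and writes them back rather than relinking nodes, which is cheap for plain ints but would mean copying whole records for larger payloads.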

--- EDIT

I decided to test my hypothesis and wrote a C-program which measured the time (using clock()) taken to sort a linked list of ints. I tried with a linked list where each node was allocated with malloc() and a linked list where the nodes were laid out linearly in an array, so the cache performance would be better. I compared these with the built-in qsort, which included copying everything from a fragmented list to an array and copying the result back again. Each algorithm was run on the same 10 data sets and the results were averaged.
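As a rough illustration of that setup (not the actual benchmark code; build_packed_list and time_sort are hypothetical names), the packed variant and the clock()-based timing might look something like this:

    #include <stdlib.h>
    #include <time.h>

    struct node { int value; struct node *next; };

    /* "Packed" list: all nodes sit contiguously in one malloc'ed block,
       so walking the list also walks memory linearly (cache friendly). */
    static struct node *build_packed_list(const int *values, size_t n)
    {
        struct node *nodes = malloc(n * sizeof *nodes);
        if (!nodes)
            return NULL;
        for (size_t i = 0; i < n; i++) {
            nodes[i].value = values[i];
            nodes[i].next  = (i + 1 < n) ? &nodes[i + 1] : NULL;
        }
        return nodes;
    }

    /* Time one sorting routine with clock(); note clock() measures CPU time. */
    static double time_sort(struct node *(*sort)(struct node *), struct node *head)
    {
        clock_t start = clock();
        (void)sort(head);
        return (double)(clock() - start) / CLOCKS_PER_SEC;
    }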

These are the results:

    N           Fragmented list with merge sort   Array with qsort   Packed list with merge sort
    1000                     0.000000 s                 0.000000 s                    0.000000 s
    100000                   0.039000 s                 0.025000 s                    0.009000 s
    1000000                  1.162000 s                 0.420000 s                    0.112000 s
    100000000              364.797000 s                61.166000 s                   16.525000 s

Conclusion:

At least on my machine, copying into an array is well worth it to improve the cache performance, since you rarely have a completely packed linked list in real life. It should be noted that my machine has a 2.8GHz Phenom II, but only 0.6GHz RAM, so the cache is very important.

再贱就再见
#3 · 2019-01-03 12:16

It is reasonable to expect that you cannot do any better than O(N log N) in running time.

However, the interesting part is investigating whether you can sort it in place, whether the sort is stable, what its worst-case behaviour is, and so on.

Simon Tatham, of PuTTY fame, explains how to sort a linked list with merge sort. He concludes with the following comments:

Like any self-respecting sort algorithm, this has running time O(N log N). Because this is Mergesort, the worst-case running time is still O(N log N); there are no pathological cases.

Auxiliary storage requirement is small and constant (i.e. a few variables within the sorting routine). Thanks to the inherently different behaviour of linked lists from arrays, this Mergesort implementation avoids the O(N) auxiliary storage cost normally associated with the algorithm.

There is also an example implementation in C that works for both singly and doubly linked lists.
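For reference, a compact top-down merge sort of a singly linked list in C might look roughly like the sketch below. This is not Simon Tatham's code, and the recursive split costs O(log N) stack; his bottom-up version is what achieves the small, constant auxiliary storage described in the quote above.

    struct node { int value; struct node *next; };

    /* Merge two sorted lists; ties take from 'a' first, so the merge is stable. */
    static struct node *merge(struct node *a, struct node *b)
    {
        struct node head, *tail = &head;
        while (a && b) {
            if (b->value < a->value) { tail->next = b; b = b->next; }
            else                     { tail->next = a; a = a->next; }
            tail = tail->next;
        }
        tail->next = a ? a : b;
        return head.next;
    }

    /* Top-down merge sort: split with slow/fast pointers, sort halves, merge. */
    static struct node *merge_sort(struct node *head)
    {
        if (!head || !head->next)
            return head;

        struct node *slow = head, *fast = head->next;
        while (fast && fast->next) { slow = slow->next; fast = fast->next->next; }

        struct node *right = slow->next;
        slow->next = NULL;                /* cut the list into two halves */

        return merge(merge_sort(head), merge_sort(right));
    }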

As @Jørgen Fogh mentions below, big-O notation may hide some constant factors that can cause one algorithm to perform better because of memory locality, because of a low number of items, etc.

啃猪蹄的小仙女
#4 · 2019-01-03 12:16

Here's an implementation that traverses the list just once, collecting runs, then schedules the merges in the same way that mergesort does.

Complexity is O(n log m) where n is the number of items and m is the number of runs. Best case is O(n) (if the data is already sorted) and worst case is O(n log n) as expected.

It requires O(log m) temporary memory; the sort is done in-place on the lists.

(Updated below. A commenter makes a good point that I should describe it here.)

The gist of the algorithm is:

    while list not empty
        accumulate a run from the start of the list
        merge the run with a stack of merges that simulate mergesort's recursion
    merge all remaining items on the stack

Accumulating runs doesn't require much explanation, but it's good to take the opportunity to accumulate both ascending runs and descending runs (reversed). Here it prepends items smaller than the head of the run and appends items greater than or equal to the end of the run. (Note that prepending should use strict less-than to preserve sort stability.)
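A rough C sketch of that run-accumulation step (the answer's own code below is C++; the node type and take_run name here are just illustrative):

    struct node { int value; struct node *next; };

    /* Detach one run from the front of *list (assumed non-empty) and return it
       as a sorted list. Items strictly smaller than the run's head are prepended
       (so descending input is reversed into ascending order); items greater than
       or equal to the tail are appended. */
    static struct node *take_run(struct node **list)
    {
        struct node *head = *list, *tail = head;
        *list = head->next;
        head->next = NULL;

        while (*list) {
            struct node *next = *list;
            if (next->value < head->value) {            /* extend at the front */
                *list = next->next;
                next->next = head;
                head = next;
            } else if (next->value >= tail->value) {    /* extend at the back */
                *list = next->next;
                tail->next = next;
                next->next = NULL;
                tail = next;
            } else {
                break;                                  /* run ends here */
            }
        }
        return head;
    }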

It's easiest to just paste the merging code here:

    // 'run' is the freshly accumulated run; 'stack[i]' is either empty or
    // holds a sorted list covering 2^i runs' worth of items.
    int i = 0;
    for ( ; i < stack.size(); ++i) {
        if (!stack[i])
            break;                            // found an empty slot
        run = merge(run, stack[i], comp);     // fold that level into our run
        stack[i] = nullptr;
    }
    if (i < stack.size()) {
        stack[i] = run;                       // park the run in the empty slot
    } else {
        stack.push_back(run);                 // new level: stack grows to ~1+log2(runs)
    }

Consider sorting the list (d a g i b e c f j h) (ignoring runs). The stack states proceed as follows:

    [ ]
    [ (d) ]
    [ () (a d) ]
    [ (g) (a d) ]
    [ () () (a d g i) ]
    [ (b) () (a d g i) ]
    [ () (b e) (a d g i) ]
    [ (c) (b e) (a d g i) ]
    [ () () () (a b c d e f g i) ]
    [ (j) () () (a b c d e f g i) ]
    [ () (h j) () (a b c d e f g i) ]

Then, finally, merge all these lists.

Note that the number of items (runs) at stack[i] is either zero or 2^i and the stack size is bounded by 1+log2(nruns). Each element is merged once per stack level, hence O(n log m) comparisons. There's a passing similarity to Timsort here, though Timsort maintains its stack using something like a Fibonacci sequence where this uses powers of two.

Accumulating runs takes advantage of any already sorted data so that best case complexity is O(n) for an already sorted list (one run). Since we're accumulating both ascending and descending runs, runs will always be at least length 2. (This reduces the maximum stack depth by at least one, paying for the cost of finding the runs in the first place.) Worst case complexity is O(n log n), as expected, for data that is highly randomized.

(Um... Second update.)

Or just see the Wikipedia article on bottom-up merge sort.

孤傲高冷的网名
#5 · 2019-01-03 12:17

Here is a nice little paper on this topic. The author's empirical conclusion is that Treesort is best, followed by Quicksort and Mergesort; Sediment sort, bubble sort, and selection sort perform very badly.

A Comparative Study of Linked List Sorting Algorithms, by Ching-Kuang Shene

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.9981

成全新的幸福
#6 · 2019-01-03 12:20

As stated many times, the lower bound on comparison-based sorting for general data is Ω(n log n). To briefly resummarize the argument: there are n! different orderings of a list, so any comparison tree that distinguishes all n! possible outcomes must have height at least log2(n!). Since n! ≥ (n/2)^(n/2), we get log2(n!) ≥ (n/2) log2(n/2), which is Ω(n log n).

So, for general data on a linked list, the best possible sort that works on anything that can compare two objects is going to be O(n log n). However, if you have a more limited domain of things to work with, you can improve on that. For instance, if you are working with integers no larger than some value, you could use Counting Sort or Radix Sort, since these exploit the structure of the objects you're sorting rather than relying only on comparisons. Be careful, though: these add other terms to the complexity that you may not expect. For instance, Counting Sort and Radix Sort both carry factors based on the size of the numbers you're sorting — O(n + k), where k is the size of the largest number, in the case of Counting Sort.
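As a sketch of the idea (assuming non-negative int payloads bounded by some k; the node type and function name are illustrative), counting sort over a linked list might look like this:

    #include <stdlib.h>

    struct node { int value; struct node *next; };

    /* Counting sort for a list whose values all lie in [0, k):
       O(n + k) time, O(k) extra space. Rewrites the values in place. */
    static void counting_sort_list(struct node *head, int k)
    {
        size_t *count = calloc((size_t)k, sizeof *count);
        if (!count)
            return;

        for (struct node *p = head; p; p = p->next)
            count[p->value]++;

        struct node *p = head;
        for (int v = 0; v < k; v++)
            for (size_t c = count[v]; c > 0; c--) {
                p->value = v;
                p = p->next;
            }

        free(count);
    }

This rewrites values in place rather than relinking nodes, so it only suits simple payloads such as plain integers.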

Also, if you happen to have objects with a perfect hash (or at least a hash that maps all values differently), you could try using a counting or radix sort on their hash values.

闹够了就滚
#7 · 2019-01-03 12:24

Mergesort is the best you can do here.
