How can building a heap be O(n) time complexity?

Posted 2018-12-31 23:36

Question:

Can someone help explain how building a heap can be O(n) complexity?

Inserting an item into a heap is O(log n), and the insert is repeated n/2 times (the remainder are leaves, and can't violate the heap property). So, this means the complexity should be O(n log n), I would think.

In other words, each item we "heapify" has the potential to filter down once for each level of the heap built so far (which is log n levels).

What am I missing?

Answer 1:

I think there are several questions buried in this topic:

  • How do you implement buildHeap so it runs in O(n) time?
  • How do you show that buildHeap runs in O(n) time when implemented correctly?
  • Why doesn't that same logic work to make heap sort run in O(n) time rather than O(n log n)?

Often, answers to these questions focus on the difference between siftUp and siftDown. Making the correct choice between siftUp and siftDown is critical to get O(n) performance for buildHeap, but does nothing to help one understand the difference between buildHeap and heapSort in general. Indeed, proper implementations of both buildHeap and heapSort will only use siftDown. The siftUp operation is only needed to perform inserts into an existing heap, so it would be used to implement a priority queue using a binary heap, for example.

I've written this to describe how a max heap works. This is the type of heap typically used for heap sort or for a priority queue where higher values indicate higher priority. A min heap is also useful; for example, when retrieving items with integer keys in ascending order or strings in alphabetical order. The principles are exactly the same; simply switch the sort order.

The heap property specifies that each node in a binary heap must be at least as large as both of its children. In particular, this implies that the largest item in the heap is at the root. Sifting down and sifting up are essentially the same operation in opposite directions: move an offending node until it satisfies the heap property:

  • siftDown swaps a node that is too small with its largest child (thereby moving it down) until it is at least as large as both nodes below it.
  • siftUp swaps a node that is too large with its parent (thereby moving it up) until it is no larger than the node above it.

The number of operations required for siftDown and siftUp is proportional to the distance the node may have to move. For siftDown, it is the distance from the bottom of the tree, so siftDown is expensive for nodes at the top of the tree. With siftUp, the work is proportional to the distance from the top of the tree, so siftUp is expensive for nodes at the bottom of the tree. Although both operations are O(log n) in the worst case, in a heap, only one node is at the top whereas half the nodes lie in the bottom layer. So it shouldn't be too surprising that if we have to apply an operation to every node, we would prefer siftDown over siftUp.
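Since siftUp is the operation a priority-queue insert uses (as mentioned above), here is a minimal sketch of that use, which I'm adding purely for illustration; the class and method names are my own and not from any particular library:

// Illustrative sketch only: inserting into an array-backed max heap via siftUp.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class MaxHeap {
    private final List<Integer> a = new ArrayList<>();

    // O(log n): append the new value as the last leaf, then sift it up.
    void insert(int value) {
        a.add(value);
        siftUp(a.size() - 1);
    }

    // Swap the node at index i with its parent until it is no larger than its parent.
    private void siftUp(int i) {
        while (i > 0) {
            int parent = (i - 1) / 2;
            if (a.get(i) <= a.get(parent)) break; // heap property already satisfied
            Collections.swap(a, i, parent);
            i = parent;
        }
    }
}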

The buildHeap function takes an array of unsorted items and moves them until they all satisfy the heap property, thereby producing a valid heap. There are two approaches one might take for buildHeap using the siftUp and siftDown operations we've described.

  1. Start at the top of the heap (the beginning of the array) and call siftUp on each item. At each step, the previously sifted items (the items before the current item in the array) form a valid heap, and sifting the next item up places it into a valid position in the heap. After sifting up each node, all items satisfy the heap property.

  2. Or, go in the opposite direction: start at the end of the array and move backwards towards the front. At each iteration, you sift an item down until it is in the correct location.

Both of these solutions will produce a valid heap. The question is: which implementation for buildHeap is more efficient? Unsurprisingly, it is the second approach, the one that uses siftDown.
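For concreteness, here is a minimal sketch of that second, siftDown-based approach on a plain int array. This is my own illustrative code, not part of the original answer; it assumes the usual array layout where the children of index i live at 2i+1 and 2i+2, and the names buildHeap/siftDown simply mirror the terms used above.

// Sketch: bottom-up buildHeap for a max heap stored in arr[0..n-1].
class BuildHeap {
    static void buildHeap(int[] arr) {
        int n = arr.length;
        // Indices n/2 .. n-1 are leaves and already trivial heaps; start at the last parent.
        for (int i = n / 2 - 1; i >= 0; i--) {
            siftDown(arr, i, n);
        }
    }

    // Move arr[i] down, swapping with its largest child, until it is at least as large as both children.
    static void siftDown(int[] arr, int i, int n) {
        while (2 * i + 1 < n) {
            int child = 2 * i + 1;                            // left child
            if (child + 1 < n && arr[child + 1] > arr[child]) {
                child++;                                      // right child is larger
            }
            if (arr[i] >= arr[child]) {
                break;                                        // heap property holds here
            }
            int tmp = arr[i]; arr[i] = arr[child]; arr[child] = tmp;
            i = child;
        }
    }
}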

Let h = log n represent the height of the heap. The work required for the siftDown approach is given by the sum

(0 * n/2) + (1 * n/4) + (2 * n/8) + ... + (h * 1).

Each term in the sum has the maximum distance a node at the given height will have to move (zero for the bottom layer, h for the root) multiplied by the number of nodes at that height. In contrast, the sum for calling siftUp on each node is

(h * n/2) + ((h-1) * n/4) + ((h-2) * n/8) + ... + (0 * 1).

It should be clear that the second sum is larger. The first term alone is hn/2 = 1/2 n log n, so this approach has complexity at best O(n log n). But how do we prove that the sum for the siftDown approach is indeed O(n)? One method (there are other analyses that also work) is to turn the finite sum into an infinite series and then use Taylor series. We may ignore the first term, which is zero:

\"Taylor

If you aren't sure why each of those steps works, here is a justification for the process in words:

  • The terms are all positive, so the finite sum must be smaller than the infinite sum.
  • The series is equal to a power series evaluated at x=1/2.
  • That power series is equal to (a constant times) the derivative of the Taylor series for f(x)=1/(1-x).
  • x=1/2 is within the interval of convergence of that Taylor series.
  • Therefore, we can replace the Taylor series with 1/(1-x), differentiate, and evaluate to find the value of the infinite series.

Since the infinite sum is exactly n, we conclude that the finite sum is no larger, and is therefore O(n).
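As a concrete check (numbers chosen by me for illustration, not from the original answer), take n = 15, so h = 3. The siftDown sum is

(0 * 8) + (1 * 4) + (2 * 2) + (3 * 1) = 11,

already below n = 15, while the siftUp ordering of the same work gives

(3 * 8) + (2 * 4) + (1 * 2) + (0 * 1) = 34,

several times larger even for this tiny heap.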

The next question is: if it is possible to run buildHeap in linear time, why does heap sort require O(n log n) time? Well, heap sort consists of two stages. First, we call buildHeap on the array, which requires O(n) time if implemented optimally. The next stage is to repeatedly delete the largest item in the heap and put it at the end of the array. Because we delete an item from the heap, there is always an open spot just after the end of the heap where we can store the item. So heap sort achieves a sorted order by successively removing the next largest item and putting it into the array starting at the last position and moving towards the front. It is the complexity of this last part that dominates in heap sort. The loop looks like this:

for (i = n - 1; i > 0; i--) {
    arr[i] = deleteMax();
}

Clearly, the loop runs O(n) times (n - 1 to be precise, the last item is already in place). The complexity of deleteMax for a heap is O(log n). It is typically implemented by removing the root (the largest item left in the heap) and replacing it with the last item in the heap, which is a leaf, and therefore one of the smallest items. This new root will almost certainly violate the heap property, so you have to call siftDown until you move it back into an acceptable position. This also has the effect of moving the next largest item up to the root. Notice that, in contrast to buildHeap where for most of the nodes we are calling siftDown from the bottom of the tree, we are now calling siftDown from the top of the tree on each iteration! Although the tree is shrinking, it doesn't shrink fast enough: The height of the tree stays constant until you have removed the first half of the nodes (when you clear out the bottom layer completely). Then for the next quarter, the height is h - 1. So the total work for this second stage is

(h * n/2) + ((h-1) * n/4) + ... + (0 * 1).

Notice the switch: now the zero work case corresponds to a single node and the h work case corresponds to half the nodes. This sum is O(n log n) just like the inefficient version of buildHeap that is implemented using siftUp. But in this case, we have no choice since we are trying to sort and we require the next largest item be removed next.

In summary, the work for heap sort is the sum of the two stages: O(n) time for buildHeap and O(n log n) to remove each node in order, so the complexity is O(n log n). You can prove (using some ideas from information theory) that for a comparison-based sort, O(n log n) is the best you could hope for anyway, so there's no reason to be disappointed by this or expect heap sort to achieve the O(n) time bound that buildHeap does.
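Putting the two stages together, an in-place heap sort might look like the following sketch. Again, this is my own illustrative code rather than the answer's: it reuses the same siftDown idea as the buildHeap sketch above, and the deleteMax of the loop shown earlier is realized by swapping the root with the last element of the shrinking heap.

// Sketch: heap sort = O(n) buildHeap followed by n-1 deleteMax steps, O(n log n) overall.
class HeapSort {
    static void heapSort(int[] arr) {
        int n = arr.length;
        // Stage 1: buildHeap in O(n), sifting down from the last parent to the root.
        for (int i = n / 2 - 1; i >= 0; i--) siftDown(arr, i, n);
        // Stage 2: repeatedly move the current max to the end and re-heapify the rest.
        for (int end = n - 1; end > 0; end--) {
            int tmp = arr[0]; arr[0] = arr[end]; arr[end] = tmp; // deleteMax into position 'end'
            siftDown(arr, 0, end);                               // restore the heap of size 'end'
        }
    }

    // Same siftDown as in the buildHeap sketch above.
    static void siftDown(int[] arr, int i, int n) {
        while (2 * i + 1 < n) {
            int child = 2 * i + 1;
            if (child + 1 < n && arr[child + 1] > arr[child]) child++;
            if (arr[i] >= arr[child]) break;
            int tmp = arr[i]; arr[i] = arr[child]; arr[child] = tmp;
            i = child;
        }
    }
}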



Answer 2:

Your analysis is correct. However, it is not tight.

It is not really easy to explain why building a heap is a linear operation; you should read the analysis for yourself.

A great analysis of the algorithm can be seen here.


The main idea is that in the build_heap algorithm the actual heapify cost is not O(log n) for all elements.

When heapify is called, the running time depends on how far an element might move down in the tree before the process terminates. In other words, it depends on the height of the element in the heap. In the worst case, the element might go down all the way to the leaf level.

Let us count the work done level by level.

At the bottommost level, there are 2^h nodes, but we do not call heapify on any of these, so the work is 0. At the next level up, there are 2^(h − 1) nodes, and each might move down by 1 level. At the 3rd level from the bottom, there are 2^(h − 2) nodes, and each might move down by 2 levels.

As you can see, not all heapify operations are O(log n); this is why you end up with O(n).
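To see this level-by-level counting in action, here is a tiny illustrative program (my own addition, not from the original answer) that sums the bound for a full tree of height h, where the level k levels above the leaves has 2^(h-k) nodes, each moving down at most k levels, and compares the result with n:

// Sketch: worst-case number of moves for buildHeap on a full tree of height h.
class LevelCount {
    public static void main(String[] args) {
        int h = 10;                           // tree height; the tree has n = 2^(h+1) - 1 nodes
        long n = (1L << (h + 1)) - 1;
        long moves = 0;
        // Level k (0 = leaves): 2^(h-k) nodes, each sifts down at most k levels.
        for (int k = 0; k <= h; k++) {
            moves += (long) k * (1L << (h - k));
        }
        // Prints n = 2047, worst-case moves = 2036, i.e. less than n.
        System.out.println("n = " + n + ", worst-case moves = " + moves);
    }
}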



Answer 3:

Intuitively:

\"The complexity should be O(nLog n)... for each item we \"heapify\", it has the potential to have to filter down once for each level for the heap so far (which is log n levels).\"

Not quite. Your logic does not produce a tight bound; it overestimates the complexity of each heapify. If built from the bottom up, insertion (heapify) can be much less than O(log(n)). The process is as follows:

(Step 1) The first n/2 elements go on the bottom row of the heap. h=0, so heapify is not needed.

(Step 2) The next n/2^2 elements go on the row 1 up from the bottom. h=1, heapify filters 1 level down.

(Step i) The next n/2^i elements go in row i up from the bottom. h=i, heapify filters i levels down.

(Step log(n)) The last n/2^(log2(n)) = 1 element goes in row log(n) up from the bottom. h=log(n), heapify filters log(n) levels down.

NOTICE that after step one, half of the elements (n/2) are already in the heap, and we didn't even need to call heapify once. Also, notice that only a single element, the root, actually incurs the full log(n) complexity.


Theoretically:

The total number of steps N to build a heap of size n can be written out mathematically.

At height i, we've shown (above) that there will be n/2^(i+1) elements that need to call heapify, and we know heapify at height i is O(i). This gives:

\"enter

The solution to the last summation can be found by taking the derivative of both sides of the well known geometric series equation:

\"enter

Finally, plugging in x = 1/2 into the above equation yields 2. Plugging this into the first equation gives:

\"enter

Thus, the total number of steps is O(n).



Answer 4:

It would be O(n log n) if you built the heap by repeatedly inserting elements. However, you can create a new heap more efficiently by inserting the elements in arbitrary order and then applying an algorithm to "heapify" them into the proper order (depending on the type of heap of course).

See http://en.wikipedia.org/wiki/Binary_heap, "Building a heap" for an example. In this case you essentially work up from the bottom level of the tree, swapping parent and child nodes until the heap conditions are satisfied.



Answer 5:

As we know, the height of a heap is log(n), where n is the total number of elements. Let's represent it as h.
   When we perform the heapify operation, the elements at the last level (h) won't move even a single step.
   The number of elements at the second-to-last level (h-1) is 2^(h-1), and they can move at most 1 level (during heapify).
   Similarly, at the ith level we have 2^i elements, which can move h-i positions.

Therefore the total number of moves is S = 2^h*0 + 2^(h-1)*1 + 2^(h-2)*2 + ... + 2^0*h

    S = 2^h * (1/2 + 2/2^2 + 3/2^3 + ... + h/2^h)   ...(1)
This is an arithmetico-geometric series; to solve it, divide both sides by 2:
    S/2 = 2^h * (1/2^2 + 2/2^3 + ... + h/2^(h+1))   ...(2)
Subtracting equation (2) from (1) gives
    S/2 = 2^h * (1/2 + 1/2^2 + 1/2^3 + ... + 1/2^h - h/2^(h+1))
    S = 2^(h+1) * (1/2 + 1/2^2 + 1/2^3 + ... + 1/2^h - h/2^(h+1))
Now 1/2 + 1/2^2 + 1/2^3 + ... + 1/2^h is a decreasing geometric series whose sum is less than 1 (as h tends to infinity, the sum tends to 1). Taking 1 as an upper bound on that sum (and dropping the negative term only makes the bound larger), we get
    S <= 2^(h+1)
As h = log(n), 2^h = n and so 2^(h+1) = 2n.

Therefore S <= 2n
T(n) = O(n)



Answer 6:

While building a heap, let's say you're taking a bottom-up approach.

  1. You take each element and compare it with its children to check whether the pair conforms to the heap rules. The leaves therefore get included in the heap for free, because they have no children.
  2. Moving upwards, the worst case for the nodes right above the leaves would be 1 comparison (at most they would be compared with just one generation of children).
  3. Moving further up, their immediate parents can be compared with at most two generations of children.
  4. Continuing in the same direction, you'll have log(n) comparisons for the root in the worst case, and log(n)-1 for its immediate children, log(n)-2 for their immediate children, and so on.
  5. So summing it all up, you arrive at something like log(n) + {log(n)-1}*2 + {log(n)-2}*4 + ... + 1*2^{log(n)-1}, which is nothing but O(n).


Answer 7:

When building the heap, we start from height log(n) - 1 (where log(n) is the height of a tree of n elements), i.e. with the parents of the leaves. At height h there are 2^(log(n) - h) elements, and each of them goes down at most h levels.

    So the total number of traversals would be:
    T(n) = sigma((2^(logn - h)) * h), where h varies from 1 to logn
    T(n) = n * ((1/2) + (2/4) + (3/8) + ... + (logn/(2^logn)))
    T(n) = n * (sigma(x/(2^x))), where x varies from 1 to logn
    The series in the brackets approaches 2 as it goes to infinity.
    Hence T(n) ~ O(n)


Answer 8:

Successive insertions can be described by:

T = O(log(1) + log(2) + ... + log(n)) = O(log(n!))

By Stirling's approximation, n! = O(n^(n + O(1))), therefore T = O(n log(n)).

Hope this helps; the optimal O(n) way is to use the build-heap algorithm on the given set (the ordering doesn't matter).



Answer 9:

@bcorso has already demonstrated the proof of the complexity analysis. But for the sake of those still learning complexity analysis, I have this to add:

Your original mistake stems from a misinterpretation of the meaning of the statement, "insertion into a heap takes O(log n) time". Insertion into a heap is indeed O(log n), but you have to recognise that n is the size of the heap during the insertion.

In the context of inserting n objects into a heap, the complexity of the ith insertion is O(log n_i), where n_i is the size of the heap at the time of insertion i. Only the last insertion has a complexity of O(log n).



Answer 10:

Proof of O(n)

The proof isn't fancy and is quite straightforward; I only proved the case for a full binary tree, but the result can be generalized to a complete binary tree.



Answer 11:

I really like the explanation by Jeremy West. Another approach which is really easy to understand is given here: http://courses.washington.edu/css343/zander/NotesProbs/heapcomplexity

Since buildHeap uses heapify with the siftDown approach, the cost depends on the sum of the heights of all the nodes. That sum is S = summation from i = 0 to i = h of 2^i * (h - i), where h = log n is the height of the tree. Solving for S, we get S = 2^(h+1) - 1 - (h+1). Since n = 2^(h+1) - 1, this gives S = n - h - 1 = n - log n - 1, so S = O(n), and therefore the complexity of buildHeap is O(n).



Answer 12:

\"The linear time bound of build Heap, can be shown by computing the sum of the heights of all the nodes in the heap, which is the maximum number of dashed lines. For the perfect binary tree of height h containing N = 2^(h+1) – 1 nodes, the sum of the heights of the nodes is N – H – 1. Thus it is O(N).\"



Answer 13:

Basically, work is done only on non-leaf nodes while building a heap, and the work done is the amount of swapping down needed to satisfy the heap condition. In other words (in the worst case), the amount is proportional to the height of the node. All in all, the complexity of the problem is proportional to the sum of the heights of all the non-leaf nodes, which is (2^(h+1) - 1) - h - 1 = n - h - 1 = O(n).



Answer 14:

Let's suppose you have N elements in a heap. Then its height would be log(N).

Now suppose you want to insert another element; the complexity would be log(N), since we may have to compare all the way up to the root.

Now you have N+1 elements and height = log(N+1).

Using induction, it can be proved that the total complexity of the insertions would be ∑ log i.

Now using

log a + log b = log ab

This simplifies to: ∑ log i = log(n!)

which is actually O(N log N)

But

we are doing something wrong here, since we do not reach the top in every case. Most of the time, while executing, we find that we do not go even halfway up the tree. Hence, this bound can be tightened by using the mathematics given in the answers above.

This realization came to me after detailed thought and experimentation on heaps.



Answer 15:

I think you're making a mistake. Take a look at this: http://golang.org/pkg/container/heap/. Building a heap isn't O(n); however, inserting is O(lg(n)). I'm assuming initialization is O(n) if you set a heap size, because the heap needs to allocate space and set up the data structure. If you have n items to put into the heap then yes, each insert is lg(n) and there are n items, so you get n*lg(n) as you stated.