I know the difference between the unshift() and push() methods in JavaScript, but I'm wondering what the difference is in time complexity?
I suppose push() is O(1), because you're just adding an item to the end of the array, but I'm not sure about unshift(): since you presumably have to "move" all the other existing elements forward, is that O(log n) or O(n)?
The JavaScript language spec does not mandate the time complexity of these functions, as far as I know.
It is certainly possible to implement an array-like data structure (O(1) random access) with O(1) push and unshift operations. The C++ std::deque is an example. A JavaScript implementation that used C++ deques to represent JavaScript arrays internally would therefore have O(1) push and unshift operations.
But if you need to guarantee such time bounds, you will have to roll your own, like this:
http://code.stephenmorley.org/javascript/queues/
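As an illustration of that roll-your-own approach (a minimal sketch of the same idea as the linked queue library, not its actual code): keep a plain array plus a head index, so removing from the front just advances the index instead of shifting every element, giving amortized O(1) dequeues.

// Sketch of an amortized O(1) queue: an array plus a head offset.
class Queue {
  constructor() {
    this.items = [];   // backing array
    this.head = 0;     // index of the current front element
  }
  enqueue(value) {
    this.items.push(value);              // amortized O(1)
  }
  dequeue() {
    if (this.head >= this.items.length) return undefined;
    const value = this.items[this.head++];
    // Occasionally drop the consumed prefix so memory is reclaimed;
    // each element is copied only a constant number of times on average.
    if (this.head * 2 >= this.items.length) {
      this.items = this.items.slice(this.head);
      this.head = 0;
    }
    return value;                        // amortized O(1)
  }
  get length() {
    return this.items.length - this.head;
  }
}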
push() is faster.
js>function foo() {a=[]; start = new Date; for (var i=0;i<100000;i++) a.unshift(1); return((new Date)-start)}
js>foo()
2190
js>function bar() {a=[]; start = new Date; for (var i=0;i<100000;i++) a.push(1); return((new Date)-start)}
js>bar()
10
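If you want to check the asymptotic behaviour rather than a single data point, you can time both operations at a few sizes (a rough sketch, assuming a modern browser or Node.js where performance.now() is available): if unshift is O(n) per call, doubling the element count should roughly quadruple its total time, while push's total time should only roughly double.

// Rough scaling check for push vs unshift at increasing sizes.
function time(fn) {
  const start = performance.now();
  fn();
  return performance.now() - start;
}

for (const n of [50000, 100000, 200000]) {
  const tUnshift = time(() => { const a = []; for (let i = 0; i < n; i++) a.unshift(1); });
  const tPush    = time(() => { const a = []; for (let i = 0; i < n; i++) a.push(1); });
  console.log(n, 'unshift:', tUnshift.toFixed(1) + 'ms', 'push:', tPush.toFixed(1) + 'ms');
}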
IMHO it depends on the JavaScript engine...
if it uses a linked list internally, unshift should be quite cheap...
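For example, a minimal sketch of why a linked-list representation would make unshift cheap (purely an illustration; real engines do not store plain arrays this way, since it would make indexed access O(n)):

// Singly linked list: adding to the front only creates a node and
// repoints `head`, so no existing elements have to be moved.
class LinkedList {
  constructor() {
    this.head = null;
  }
  unshift(value) {
    this.head = { value, next: this.head };   // O(1)
  }
  shift() {
    if (this.head === null) return undefined;
    const value = this.head.value;
    this.head = this.head.next;               // O(1)
    return value;
  }
}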
One way of implementing arrays with both fast unshift and push is to simply put your data into the middle of your C-level array. That's how Perl does it, IIRC.
Another way to do it is to have two separate C-level arrays, so that push appends to one of them and unshift appends to the other. There's no real benefit to this approach over the previous one that I know of.
Regardless of how it's implemented, a push or an unshift will take O(1) time when the internal C-level array has enough spare capacity; otherwise, when reallocation must be done, it takes at least O(N) time to copy the old data to the new block of memory.
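Here is a sketch of the "grow at both ends" idea expressed in JavaScript (a hypothetical illustration, not how any particular engine implements arrays): keep the elements at integer keys between a head and a tail pointer, so both push and unshift just move a pointer.

// Deque sketch: elements live at integer keys in [head, tail).
// Growing either end only moves a pointer, so both operations are O(1)
// (ignoring any reallocation a real flat-array backing store would need).
class Deque {
  constructor() {
    this.items = {};
    this.head = 0;   // index of the first element
    this.tail = 0;   // index one past the last element
  }
  push(value) {
    this.items[this.tail++] = value;    // append on the right
  }
  unshift(value) {
    this.items[--this.head] = value;    // prepend on the left
  }
  get(i) {
    return this.items[this.head + i];   // O(1) random access
  }
  get length() {
    return this.tail - this.head;
  }
}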
For people curious about the V8 implementation, here is the source. Because unshift takes an arbitrary number of arguments, the array will shift itself to accommodate all of the arguments.
UnshiftImpl ends up calling AddArguments with a start_position of AT_START, which kicks it into this else statement:
// If the backing store has enough capacity and we add elements to the
// start we have to shift the existing objects.
Isolate* isolate = receiver->GetIsolate();
Subclass::MoveElements(isolate, receiver, backing_store, add_size, 0,
                       length, 0, 0);
and that takes it to MoveElements:
static void MoveElements(Isolate* isolate, Handle<JSArray> receiver,
                         Handle<FixedArrayBase> backing_store, int dst_index,
                         int src_index, int len, int hole_start,
                         int hole_end) {
  Heap* heap = isolate->heap();
  Handle<BackingStore> dst_elms = Handle<BackingStore>::cast(backing_store);
  if (len > JSArray::kMaxCopyElements && dst_index == 0 &&
      heap->CanMoveObjectStart(*dst_elms)) {
    // Update all the copies of this backing_store handle.
    *dst_elms.location() =
        BackingStore::cast(heap->LeftTrimFixedArray(*dst_elms, src_index))
            ->ptr();
    receiver->set_elements(*dst_elms);
    // Adjust the hole offset as the array has been shrunk.
    hole_end -= src_index;
    DCHECK_LE(hole_start, backing_store->length());
    DCHECK_LE(hole_end, backing_store->length());
  } else if (len != 0) {
    WriteBarrierMode mode = GetWriteBarrierMode(KindTraits::Kind);
    dst_elms->MoveElements(heap, dst_index, src_index, len, mode);
  }
  if (hole_start != hole_end) {
    dst_elms->FillWithHoles(hole_start, hole_end);
  }
}
I also want to call out that V8 has a concept of different element kinds depending on what the array contains. This can also affect performance.
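As a rough illustration (the kind names below are V8 internals and an assumption about current behaviour, not part of the language; they can be inspected in d8 or Node with --allow-natives-syntax and %DebugPrint):

// Arrays that V8 tracks with different (internal) element kinds:
const smis    = [1, 2, 3];         // PACKED_SMI_ELEMENTS: small integers only
const doubles = [1.1, 2.2, 3.3];   // PACKED_DOUBLE_ELEMENTS: floating point
const mixed   = [1, 'two', {}];    // PACKED_ELEMENTS: arbitrary values
const holey   = [1, , 3];          // HOLEY_SMI_ELEMENTS: contains a hole
// Transitions only go toward more general kinds (e.g. smis.push(4.5)
// turns it into PACKED_DOUBLE_ELEMENTS), and holey/general kinds tend
// to be slower to operate on.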