Copying a shuffled range(10**6) list ten times takes me about 0.18 seconds (these are five runs):
0.175597017661
0.173731403198
0.178601711594
0.180330912952
0.180811964451
Copying the unshuffled list ten times takes me about 0.05 seconds:
0.058402235973
0.0505464636856
0.0509734306934
0.0526022752744
0.0513324916184
Here's my testing code:
from timeit import timeit
import random
a = range(10**6)
random.shuffle(a) # Remove this for the second test.
a = list(a) # Just an attempt to "normalize" the list.
for _ in range(5):
print timeit(lambda: list(a), number=10)
I also tried copying with a[:]; the results were similar (i.e., a big speed difference).
Why the big speed difference? I know and understand the speed difference in the famous Why is it faster to process a sorted array than an unsorted array? example, but here my processing has no decisions. It's just blindly copying the references inside the list, no?
I'm using Python 2.7.12 on Windows 10.
Edit: Tried Python 3.5.2 as well now, the results were almost the same (shuffled consistently around 0.17 seconds, unshuffled consistently around 0.05 seconds). Here's the code for that:
a = list(range(10**6))
random.shuffle(a)
a = list(a)
for _ in range(5):
print(timeit(lambda: list(a), number=10))
Before the shuffle, objects at adjacent indices are allocated next to each other on the heap, so the cache hit rate is high when they are accessed in order. After the shuffle, the objects at adjacent indices of the list are no longer adjacent in memory, and the hit rate is very poor.
The interesting bit is that it depends on the order in which the integers were first created. For example, instead of shuffle, create a random sequence with random.randint: that is as fast as copying your list(range(10**6)) (the first and fast example). However, when you shuffle, your integers aren't in the order they were first created anymore, and that's what makes it slow.
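The randint variant referenced here isn't reproduced above; a minimal sketch of the comparison, with the size, range, and seed chosen for illustration, might look like this:

```python
from timeit import timeit
import random

random.seed(0)  # for reproducibility; not part of the original test
n = 10**6

# Integers created in ascending order, then shuffled out of creation order:
shuffled = list(range(n))
random.shuffle(shuffled)

# Integers created directly in random order (creation order == list order):
random_order = [random.randint(0, n) for _ in range(n)]

t_shuffled = timeit(lambda: list(shuffled), number=10)
t_random = timeit(lambda: list(random_order), number=10)

# The randint list tends to copy about as fast as an unshuffled range,
# because its items are still in creation order on the heap.
print(t_shuffled, t_random)
```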
A quick intermezzo: Python's object model relies on reference counting, so copying a list increments the reference count of every item (Py_INCREF in list_slice). Python really needs to go to where the object is; it can't just copy the reference. So when you copy your list, you get each item of that list and put it "as is" in the new list. When the next item was created shortly after the current one, there is a good chance (no guarantee!) that it is saved next to it on the heap.
Let's assume that whenever your computer loads an item into the cache, it also loads the x next-in-memory items (cache locality). Then it can perform the reference-count increment for x+1 items from the same cache line! With the shuffled sequence it still loads the next-in-memory items, but these aren't the ones next in the list, so it can't perform the reference-count increment without "really" looking for the next item.
TL;DR: The actual speed depends on what happened before the copy: in what order were these items created and in what order are these in the list.
You can verify this by looking at the id of each item (in CPython, id is the object's memory address). In the unshuffled list, the ids of neighbouring items differ by a small constant step, so these objects really are "next to each other on the heap". After shuffle, the ids of neighbouring items jump around, which shows they are not next to each other in memory.
Important note:
I haven't thought this up myself. Most of this information can be found in the blog post by Ricky Stewart.
This answer is based on the "official" CPython implementation of Python. The details in other implementations (Jython, PyPy, IronPython, ...) may be different. Thanks @JörgWMittag for pointing this out.
When you shuffle the list items, they have worse locality of reference, leading to worse cache performance.
You might think that copying the list just copies the references, not the objects, so their locations on the heap shouldn't matter. However, copying still involves accessing each object in order to modify the refcount.
As explained by others, it's not just copying the references: copying also increments the reference counts inside the objects, so the objects themselves are accessed and the cache plays a role.
Here I just want to add more experiments. Not so much about shuffled vs unshuffled (where accessing one element might miss the cache but gets the following elements into the cache so they get hit), but about repeating elements, where later accesses of the same element might hit the cache because the element is still in the cache.
Testing a normal range:
A list of the same size but with just one element repeated over and over again is faster because it hits the cache all the time:
And it doesn't seem to matter what number it is:
Interestingly, it gets even faster when I instead repeat the same two or four elements:
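The lists and timings for these cases aren't shown above; a sketch of how they could be reconstructed (element values and the number of repetitions are assumptions, chosen to avoid CPython's small-int cache being the only factor):

```python
from timeit import timeit

n = 10**6

# One list per case, all of length n.
candidates = {
    "range(n)": list(range(n)),
    "one element repeated": [12345] * n,
    "two elements repeated": [12345, 67890] * (n // 2),
    "four elements repeated": [1111, 2222, 3333, 4444] * (n // 4),
}

for name, lst in candidates.items():
    # Copy each list ten times, as in the question's benchmark.
    print(name, timeit(lambda: list(lst), number=10))
```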
I guess something doesn't like having the same single counter increased all the time. Maybe it's a pipeline stall, because each increment has to wait for the result of the previous increment, but this is a wild guess.
Anyway, trying this for even larger numbers of repeated elements:
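The benchmark loop itself isn't shown; a sketch of how it could be set up (the repetition scheme, element values, and repeat counts are assumptions):

```python
from timeit import timeit
import itertools

n = 10**6

# For each k, build a list of length n that cycles through k distinct
# (non-cached) ints, then time copying it a few times.
for k in [1, 2, 4, 8, 16]:
    elements = list(range(10**9, 10**9 + k))
    a = list(itertools.islice(itertools.cycle(elements), n))
    times = [timeit(lambda: list(a), number=5) for _ in range(3)]
    print(k, sum(times) / 3)
```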
The output (first column is the number of different elements, for each I test three times and then take the average):
So from about 2.8 seconds for a single (repeated) element it drops to about 2.2 seconds for 2, 4, 8, 16, ... different elements and stays at about 2.2 seconds until the hundred thousands. I think this uses my L2 cache (4 × 256 KB, I have an i7-6700).
Then over a few steps, the times go up to 3.5 seconds. I think this uses a mix of my L2 cache and my L3 cache (8 MB) until that's "exhausted" as well.
At the end it stays at around 3.5 seconds, I guess because my caches don't help with the repeated elements anymore.