Performance of library itertools compared to python

Published 2020-03-24 04:17

Question:

As an answer to my question Find the 1 based position to which two lists are the same I got the hint to use the C library itertools to speed things up.

To verify I coded the following test using cProfile:

from itertools import takewhile, izip

def match_iter(self, other):
    return sum(1 for x in takewhile(lambda x: x[0] == x[1],
                                        izip(self, other)))

def match_loop(self, other):
    element = -1  # stays -1 when the loop body never runs (empty input)
    for element in range(min(len(self), len(other))):
        if self[element] != other[element]:
            element -= 1  # step back to the last matching index
            break
    return element + 1  # convert the last matching index into a count

def test():
    a = [0, 1, 2, 3, 4]
    b = [0, 1, 2, 3, 4, 0]

    print("match_loop a=%s, b=%s, result=%s" % (a, b, match_loop(a, b)))
    print("match_iter a=%s, b=%s, result=%s" % (a, b, match_iter(a, b)))

    i = 10000
    while i > 0:
        i -= 1
        match_loop(a, b)
        match_iter(a, b)

def profile_test():
    import cProfile
    cProfile.run('test()')

if __name__ == '__main__':
    profile_test()

The function match_iter() uses itertools, and the function match_loop() is the one I had implemented before in plain Python.

The function test() defines two lists and prints them along with the results of the two functions to verify it is working. Both results have the expected value 5, the length of the matching prefix. It then loops 10,000 times over both functions.

Finally the whole thing is profiled using profile_test().

What I learned then was that izip is not available in the itertools of Python 3, at least not on the Debian Wheezy system I am using, so I ran the test with Python 2.7.
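(For anyone on Python 3: izip was dropped there because the built-in zip is already lazy, so the iterator version ports by simply swapping izip for zip. A sketch, not part of the original test:)

```python
from itertools import takewhile

def match_iter(seq, other):
    # Python 3 port: the built-in zip is lazy, replacing itertools.izip
    return sum(1 for _ in takewhile(lambda pair: pair[0] == pair[1],
                                    zip(seq, other)))

print(match_iter([0, 1, 2, 3, 4], [0, 1, 2, 3, 4, 0]))  # expected: 5
```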

Here are the results:

python2.7 match_test.py
match_loop a=[0, 1, 2, 3, 4], b=[0, 1, 2, 3, 4, 0], result=5
match_iter a=[0, 1, 2, 3, 4], b=[0, 1, 2, 3, 4, 0], result=5
         180021 function calls in 0.636 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.636    0.636 <string>:1(<module>)
        1    0.039    0.039    0.636    0.636 match_test.py:15(test)
    10001    0.048    0.000    0.434    0.000 match_test.py:3(match_iter)
    60006    0.188    0.000    0.275    0.000 match_test.py:4(<genexpr>)
    50005    0.087    0.000    0.087    0.000 match_test.py:4(<lambda>)
    10001    0.099    0.000    0.162    0.000 match_test.py:7(match_loop)
    20002    0.028    0.000    0.028    0.000 {len}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
    10001    0.018    0.000    0.018    0.000 {min}
    10001    0.018    0.000    0.018    0.000 {range}
    10001    0.111    0.000    0.387    0.000 {sum}

What makes me wonder, looking at the cumtime values, is that my plain Python version takes 0.162 seconds for 10,000 loops while the match_iter version takes 0.434 seconds.

For one thing, Python is very fast, great, so I do not have to worry. But can it be correct that the C library takes more than twice as long to finish the job as simple Python code? Or am I making a fatal mistake?

To verify, I also ran the test with Python 2.6, which seems to be even faster, but with the same difference between looping and itertools.

Who is experienced and willing to help?

Answer 1:

I imagine the issue here is that your test lists are tiny, meaning any difference is likely to be minimal and the cost of creating the iterators outweighs the gains they give.

In larger tests (where the performance is more likely to matter), the version using sum() will likely outperform the other version.
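One way to probe that claim is to repeat the timing on much longer inputs, where the one-off cost of building the iterators amortizes. A rough sketch (Python 3, zip in place of izip; the definitions are repeated from the question, and the absolute numbers will vary by machine):

```python
import timeit
from itertools import takewhile

def match_iter(seq, other):
    # iterator version from the question (Python 3: zip is lazy)
    return sum(1 for _ in takewhile(lambda p: p[0] == p[1], zip(seq, other)))

def match_loop(seq, other):
    # index-based loop version from the question
    element = -1
    for element in range(min(len(seq), len(other))):
        if seq[element] != other[element]:
            element -= 1
            break
    return element + 1

a = list(range(100000))
b = list(range(100000)) + [0]  # matches a for its full length

t_iter = timeit.timeit(lambda: match_iter(a, b), number=5)
t_loop = timeit.timeit(lambda: match_loop(a, b), number=5)
print("iter: %.3fs  loop: %.3fs" % (t_iter, t_loop))
```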

Also, there is the matter of style: the manual version is longer and relies on iterating by index, which also makes it less flexible.

I would argue the most readable solution would be something like this:

def while_equal(seq, other):
    for this, that in zip(seq, other):
        if this != that:
            return
        yield this

def match(seq, other):
    return sum(1 for _ in while_equal(seq, other))
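For completeness, a quick sanity check of this approach (definitions repeated so the snippet runs standalone):

```python
def while_equal(seq, other):
    # yield elements for as long as both sequences agree
    for this, that in zip(seq, other):
        if this != that:
            return
        yield this

def match(seq, other):
    # count how many leading elements the sequences share
    return sum(1 for _ in while_equal(seq, other))

print(match([0, 1, 2, 3, 4], [0, 1, 2, 3, 4, 0]))  # expected: 5
print(match([9, 1], [0, 1]))                       # expected: 0
```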

Interestingly, on my system a slightly modified version of this:

def while_equal(seq, other):
    for this, that in zip(seq, other):
        if this != that:
            return
        yield 1

def match(seq, other):
    return sum(while_equal(seq, other))

Performs better than the pure loop version:

a = [0, 1, 2, 3, 4]
b = [0, 1, 2, 3, 4, 0]

import timeit

print(timeit.timeit('match_loop(a,b)', 'from __main__ import a, b, match_loop'))
print(timeit.timeit('match(a,b)', 'from __main__ import match, a, b'))

Giving:

1.3171300539979711
1.291257290984504

That said, if we improve the pure loop version to be more Pythonic:

def match_loop(seq, other):
    count = 0
    for this, that in zip(seq, other):
        if this != that:
            return count
        count += 1
    return count

This times (using the same method as above) at 0.8548871780512854 for me, significantly faster than any other method while still being readable. The difference is probably due to the original version looping by index, which is generally very slow. I would still go for the first version in this post, as I feel it is the most readable.



Answer 2:

  • First, kudos for actually timing something.
  • Second, readability is usually more important than writing fast code. If your code runs 3x faster, but you spend 2 out of every 3 weeks debugging it, it's not worth your time.
  • Third, you can also use timeit to time small bits of code. I find that approach to be a little easier than using profile. (profile is good for finding bottlenecks though).

itertools is, in general, pretty fast. In this case, however, takewhile slows things down because it has to call a Python-level function for every element along the way. Each function call in Python carries a fair amount of overhead, so that adds up quickly (there is also the one-off cost of creating the lambda in the first place). Note that sum over the generator expression adds a little overhead as well. Ultimately, though, a basic loop appears to win in this situation every time.
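That per-call overhead is easy to make visible in isolation. This micro-benchmark (illustrative only; absolute numbers depend on the machine) compares evaluating a comparison inline with routing it through a Python-level function, which is essentially what the takewhile predicate forces:

```python
import timeit

# identical work per element, with and without a Python-level call
t_call = timeit.timeit("for x in data: eq(x, x)",
                       setup="eq = lambda a, b: a == b; data = range(1000)",
                       number=1000)
t_inline = timeit.timeit("for x in data: x == x",
                         setup="data = range(1000)",
                         number=1000)
print("with call: %.3fs  inline: %.3fs" % (t_call, t_inline))
```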

from itertools import takewhile, izip

def match_iter(self, other):
    return sum(1 for x in takewhile(lambda x: x[0] == x[1],
                                        izip(self, other)))

def match_loop(self, other):
    element = 0  # initialized so empty inputs return 0 instead of raising

    for element in range(min(len(self), len(other))):
        if self[element] == other[element]:
            element += 1
        else:
            break

    return element

def match_loop_lambda(self, other):
    cmp = lambda x1,x2: x1 == x2

    for element in range(min(len(self), len(other))):
        if cmp(self[element],other[element]):
            element += 1
        else:
            break

    return element

def match_iter_nosum(self,other):
    element = 0
    for _ in takewhile(lambda x: x[0] == x[1],
                       izip(self, other)):
        element += 1
    return element

def match_iter_izip(self,other):
    element = 0
    for x1,x2 in izip(self,other):
        if x1 == x2:
            element += 1
        else:
            break
    return element



a = [0, 1, 2, 3, 4]
b = [0, 1, 2, 3, 4, 0]

import timeit

print timeit.timeit('match_iter(a,b)','from __main__ import a,b,match_iter')
print timeit.timeit('match_loop(a,b)','from __main__ import a,b,match_loop')
print timeit.timeit('match_loop_lambda(a,b)','from __main__ import a,b,match_loop_lambda')
print timeit.timeit('match_iter_nosum(a,b)','from __main__ import a,b,match_iter_nosum')
print timeit.timeit('match_iter_izip(a,b)','from __main__ import a,b,match_iter_izip')

Notice, however, that the fastest version is a hybrid of a loop and itertools. This (explicit) loop over izip is also easier to read, in my opinion. So we can conclude that takewhile is the slow part, not necessarily itertools in general.
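On Python 3, where izip no longer exists, the winning hybrid translates directly to the built-in zip, which is lazy there, so the behavior is the same (a sketch):

```python
def match_iter_zip(seq, other):
    # plain loop over the lazy built-in zip: the hybrid that won above
    element = 0
    for x1, x2 in zip(seq, other):
        if x1 == x2:
            element += 1
        else:
            break
    return element

print(match_iter_zip([0, 1, 2, 3, 4], [0, 1, 2, 3, 4, 0]))  # expected: 5
```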