I've been trying to apply an algorithm to reduce a Python list into a smaller one based on a certain criterion. Because the original list is large, on the order of 100k elements, I tried to use itertools to avoid multiple memory allocations, and came up with this:
    from itertools import islice

    reducedVec = ['F' if sum(1 for x in islice(vec, i, i + ratio) if x == 'F')
                         > ratio / 3.0 else 'T'
                  for i in xrange(0, len(vec), ratio)]
Execution takes a worryingly long time, on the order of a few minutes, when vec has around 100k elements. When I tried instead:
    reducedVec = ['F' if sum(1 for x in vec[i:i + ratio] if x == 'F')
                         > ratio / 3.0 else 'T'
                  for i in xrange(0, len(vec), ratio)]
in essence replacing islice with a plain slice, the execution is instantaneous.
Can you think of a plausible explanation for this? I would have thought that avoiding repeatedly allocating a new list with a substantial number of elements would save me a few computational cycles, rather than crippling the whole execution.
Cheers, Themis
islice works with arbitrary iterables. To do this, rather than jumping straight to the nth element, it has to iterate over the first n-1, throwing them away, then yield the ones you want. Since your comprehension calls islice with a larger starting offset on each pass over vec, the total work grows quadratically with the length of the list, which is why the run takes minutes. Check out the pure Python implementation from the itertools documentation:
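The idea boils down to something like this simplified sketch (the actual recipe in the docs is more careful about open-ended stops and argument handling):

    def islice_sketch(iterable, start, stop, step=1):
        # islice cannot jump ahead in an arbitrary iterable, so every
        # element before `start` is consumed and thrown away one by one.
        for i, element in enumerate(iterable):
            if i >= stop:
                return
            if i >= start and (i - start) % step == 0:
                yield element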
Speaking of the itertools documentation, if I were trying to do this operation, I'd probably use the grouper recipe. It won't actually save you any memory, but it could if you rewrote it to be lazier, which wouldn't be tough. I like using grouper to abstract away the consecutive slices, and I find the resulting code a lot easier to read than the original:
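A sketch of that rewrite, pairing the Python 2 grouper recipe from the itertools docs with your comprehension (the recipe pads the last chunk with None, which is harmless here since None never equals 'F'):

    from itertools import izip_longest

    def grouper(n, iterable, fillvalue=None):
        "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
        args = [iter(iterable)] * n
        return izip_longest(fillvalue=fillvalue, *args)

    reducedVec = ['F' if sum(1 for x in chunk if x == 'F') > ratio / 3.0 else 'T'
                  for chunk in grouper(ratio, vec)]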
My guess would be that using islice() involves a Python function call for each element of vec, while the extended slice notation is understood by the parser and translates directly into C calls inside CPython.
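If you want to see the gap directly, a minimal timing sketch (assuming a 100k-element vec and an arbitrary ratio of 100) is:

    from itertools import islice
    from timeit import timeit

    vec = ['T', 'F'] * 50000   # 100k elements
    ratio = 100

    def with_islice():
        return ['F' if sum(1 for x in islice(vec, i, i + ratio) if x == 'F')
                       > ratio / 3.0 else 'T'
                for i in xrange(0, len(vec), ratio)]

    def with_slice():
        return ['F' if sum(1 for x in vec[i:i + ratio] if x == 'F')
                       > ratio / 3.0 else 'T'
                for i in xrange(0, len(vec), ratio)]

    print timeit(with_islice, number=1)  # islice rescans vec from index 0 on every call
    print timeit(with_slice, number=1)   # each slice is a single C-level copy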