I recently compared the performance of collections.Counter to sorted for comparison checks (i.e. whether some iterable contains the same elements in the same amounts), and while the big-iterable performance of Counter is generally better than sorted, it's much slower for short iterables.
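A rough sketch of the kind of comparison I ran (the helper names are just for illustration and the exact timings obviously depend on the machine):

from collections import Counter
import random

def same_elements_counter(a, b):
    # multiset equality via Counter
    return Counter(a) == Counter(b)

def same_elements_sorted(a, b):
    # multiset equality via sorting
    return sorted(a) == sorted(b)

short = [random.randrange(10) for _ in range(20)]
big = [random.randrange(1000) for _ in range(100000)]

# In IPython:
# %timeit same_elements_counter(short, short)   # much slower than the sorted variant
# %timeit same_elements_sorted(short, short)
# %timeit same_elements_counter(big, big)       # Counter does better here
# %timeit same_elements_sorted(big, big)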
Using line_profiler, the bottleneck seems to be the isinstance(iterable, collections.Mapping) check in Counter.update:
from collections import Counter

%load_ext line_profiler  # IPython
lst = list(range(1000))
%lprun -f Counter.update Counter(lst)
gives me:
Timer unit: 5.58547e-07 s
Total time: 0.000244643 s
File: ...\lib\collections\__init__.py
Function: update at line 581
Line # Hits Time Per Hit % Time Line Contents
==============================================================
581 def update(*args, **kwds):
601 1 8 8.0 1.8 if not args:
602 raise TypeError("descriptor 'update' of 'Counter' object "
603 "needs an argument")
604 1 12 12.0 2.7 self, *args = args
605 1 6 6.0 1.4 if len(args) > 1:
606 raise TypeError('expected at most 1 arguments, got %d' % len(args))
607 1 5 5.0 1.1 iterable = args[0] if args else None
608 1 4 4.0 0.9 if iterable is not None:
609 1 72 72.0 16.4 if isinstance(iterable, Mapping):
610 if self:
611 self_get = self.get
612 for elem, count in iterable.items():
613 self[elem] = count + self_get(elem, 0)
614 else:
615 super(Counter, self).update(iterable) # fast path when counter is empty
616 else:
617 1 326 326.0 74.4 _count_elements(self, iterable)
618 1 5 5.0 1.1 if kwds:
619 self.update(kwds)
So even for iterables of length 1000 it takes more than 15% of the time, and for even shorter iterables (for example 20 items) it increases to roughly 60%.
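The short case can be reproduced with the same setup, for example:

lst_short = list(range(20))
%lprun -f Counter.update Counter(lst_short)
# the isinstance(iterable, Mapping) line now accounts for a much larger share of the total time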
I first thought it had something to do with how collections.Mapping uses __subclasshook__, but that method isn't called again after the first isinstance check. So why is checking isinstance(iterable, Mapping) so slow?
The performance is really just tied to a collection of checks in ABCMeta's __instancecheck__, which is called by isinstance. The bottom line is that the poor performance witnessed here isn't the result of some missing optimization, but rather just a consequence of isinstance with abstract base classes being a Python-level operation, as mentioned by Jim. Positive and negative results are cached, but even with cached results you're looking at a few microseconds per loop simply to traverse the conditionals in the __instancecheck__ method of the ABCMeta class.

An example
Consider some different empty structures.
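For instance, something like this (a sketch of the setup; pandas is assumed only because the Series type comes up below, and on newer Pythons Mapping lives in collections.abc):

from collections.abc import Mapping   # just collections.Mapping on older versions
import pandas as pd

d = {}
l = []
s = pd.Series()   # empty Series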
We can see the performance discrepancy - what accounts for it?
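Roughly (exact numbers vary by machine and Python version):

%timeit isinstance(d, Mapping)
%timeit isinstance(l, Mapping)
%timeit isinstance(s, Mapping)
# the dict check comes back markedly faster than the list and Series checks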
For a dict:
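(Inspecting the caches like this assumes the pure-Python ABCMeta of CPython before 3.7, where they are plain attributes; on newer versions the same information can be inspected via abc._get_dump(Mapping).)

>>> dict in Mapping._abc_cache     # populated by the isinstance checks above
True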
For a list:
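And the negative cache, in the same sketch:

>>> list in Mapping._abc_cache
False
>>> list in Mapping._abc_negative_cache     # cached as "definitely not a Mapping"
True
>>> pd.Series in Mapping._abc_negative_cache
True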
We can see that for a dict, the Mapping abstract class's _abc_cache includes our dict, so the check short-circuits early. For a list the positive cache obviously won't be hit, but Mapping's _abc_negative_cache contains the list type, as well as (by now) the pd.Series type, as a result of calling isinstance more than once with %timeit. In the case that we don't hit the negative cache (like the first check for a Series), Python falls back to the regular subclass check via __subclasscheck__, which can be far slower because it goes through the __subclasshook__ and recursive subclass checks, and then caches the result for subsequent speedups.
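One way to watch that caching happen, again assuming the pure-Python ABCMeta of CPython before 3.7 where the caches are plain attributes (FreshABC here is just a throwaway class so the caches start out empty):

import abc

class FreshABC(abc.ABC):
    pass

x = []
print(list in FreshABC._abc_negative_cache)   # False: nothing cached yet
isinstance(x, FreshABC)                       # slow path: __subclasshook__ + recursive subclass checks
print(list in FreshABC._abc_negative_cache)   # True: the miss is cached for next time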