Overhead of creating classes in Python: exact same code using a class runs twice as slow

Published 2019-04-28 02:12

Question:

I created a Stack class as an exercise in Python, implemented entirely with list operations. For example, Stack.push() is just list.append(), Stack.pop() is list.pop(), and Stack.isEmpty() is just list == [].

I was using my Stack class to implement a decimal-to-binary converter, and I noticed that even though the two functions are completely equivalent apart from the Stack wrappers around push(), pop() and isEmpty(), the implementation using the Stack class is twice as slow as the one using Python's list directly.

Is that because there's always an inherent overhead to using classes in Python? And if so, where does the overhead come from technically speaking ("under the hood")? Finally, if the overhead is so significant, isn't it better not to use classes unless you absolutely have to?
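For reference, here is a minimal sketch of the Stack class as described; the original definition isn't shown in the question, so the backing attribute name items is an assumption:

class Stack:
    def __init__(self):
        self.items = []          # plain list as the backing store

    def push(self, item):
        self.items.append(item)  # just list.append()

    def pop(self):
        return self.items.pop()  # just list.pop()

    def isEmpty(self):
        return self.items == []  # just list == []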

def dectobin1(num):
    s = Stack()
    while num > 0:
        s.push(num % 2)
        num = num // 2
    binnum = ''
    while not s.isEmpty():
        binnum = binnum + str(s.pop())
    return binnum

def dectobin2(num):
    l = []
    while num > 0:
        l.append(num % 2)
        num = num // 2
    binnum = ''
    while not l == []:
        binnum = binnum + str(l.pop())
    return binnum


from timeit import Timer

t1 = Timer('dectobin1(255)', 'from __main__ import dectobin1')
print(t1.timeit(number = 1000))

0.0211110115051

t2 = Timer('dectobin2(255)', 'from __main__ import dectobin2')
print(t2.timeit(number = 1000))

0.0094211101532

Answer 1:

First off, a warning: function calls are rarely what limits you in speed, so this is often an unnecessary micro-optimisation. Only do it if it is what actually limits your performance. Do some proper profiling first, and look at whether there might be a better way to optimise.

Make sure you don't sacrifice legibility for this tiny performance tweak!

Classes in Python are a little bit of a hack.

The way it works is that each object has a __dict__ field (a dict) which contains all of the object's attributes. Each object also has a __class__ attribute pointing at its class, which in turn has its own __dict__ field (again a dict) containing all class attributes.

So for example have a look at this:

>>> class X(): # I know this is an old-style class declaration, but this causes far less clutter for this demonstration
...     def y(self):
...             pass
...
>>> x = X()
>>> x.__class__.__dict__
{'y': <function y at 0x6ffffe29938>, '__module__': '__main__', '__doc__': None}

If you define a function dynamically (so not in the class declaration but after the object creation) the function does not go to the x.__class__.__dict__ but instead to x.__dict__.
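Continuing the session above (the function address is abbreviated); note that a plain function assigned to an instance attribute like this is not a bound method, so self would not be passed automatically when calling x.z():

>>> def z(self):
...     pass
...
>>> x.z = z                 # attached after object creation
>>> x.__dict__
{'z': <function z at 0x...>}
>>> x.__class__.__dict__    # the class dict is unchanged
{'y': <function y at 0x6ffffe29938>, '__module__': '__main__', '__doc__': None}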

There are also two dicts that hold the variables accessible from the current function: globals() and locals(), which contain the global and local variables respectively.
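A minimal sketch of that (written to work the same in Python 2 and 3):

>>> g = 1
>>> def f():
...     l = 2
...     return 'g' in globals(), 'l' in locals()
...
>>> f()
(True, True)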

So now let's say you have an object x of class X with a method y, which was declared in the class declaration, and a second function z, which was defined dynamically on the object after creation (so it lives in x.__dict__ and shadows any z in class space). Let's say object x is defined in global space. Also, for comparison, there are two plain functions: flocal(), which was defined in local space, and fglobal(), which was defined in global space.

Now I will show what happens if you call each of these functions:

flocal():
    locals()["flocal"]()

fglobal():
    locals()["fglobal"] -> not found
    globals()["fglobal"]()

x.y():
    locals()["x"] -> not found
    globals()["x"].__dict__["y"] -> not found, because y is in class space
                  .__class__.__dict__["y"]()

x.z():
    locals()["x"] -> not found
    globals()["x"].__dict__["z"]() -> found in object dict, ignoring z() in class space

So as you can see, class-space methods take the most lookups, and object-space methods are slow as well; the fastest option is a local function.

But you can get around that without sacrificing classes. Let's say x.y() is called quite a lot and needs to be optimised.

class X():
    def y(self):
        pass

x = X()
for i in range(100000):
    x.y() # slow

y = x.y # move the function lookup outside of loop
for i in range(100000):
    y() # faster

Similar things happen with member variables of objects: they are also slower than local variables. The effect adds up if you call a function, or use a member variable, that lives in an object which is itself a member variable of another object. So, for example,

a.b.c.d.e.f()

would be a fair bit slower as each dot needs another dictionary lookup.

An official Python performance guide recommends avoiding dots in performance-critical parts of the code: https://wiki.python.org/moin/PythonSpeed/PerformanceTips
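A minimal sketch of that advice in action, using math.sqrt as the dotted name; hoisting the lookup into a local avoids re-resolving the attribute on every iteration:

import math

def slow(n):
    total = 0.0
    for i in range(n):
        total += math.sqrt(i)   # attribute lookup on math every iteration
    return total

def fast(n):
    sqrt = math.sqrt            # hoist the lookup into a local name, once
    total = 0.0
    for i in range(n):
        total += sqrt(i)        # each call now starts from a local lookup
    return total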



Answer 2:

There is an inherent overhead to using functions (and methods on an instance are just wrappers around functions that pass in self).

A function call requires the current function's state (a frame) to be stored on a stack (the Python call stack), and a new frame to be created for the function being called. All of this takes time and memory:

>>> from timeit import timeit
>>> def f(): pass
...
>>> timeit(f, number=10**7)
0.8021022859902587

There is also a (smaller) cost for looking up the attribute (methods are attributes too) and for creating the method object (each lookup of a method name on an instance creates a new method object):

>>> class Foo:
...     bar = None
...     def baz(self): pass
...
>>> timeit('instance.bar', 'from __main__ import Foo; instance = Foo()', number=10**7)
0.238075322995428
>>> timeit('instance.baz', 'from __main__ import Foo; instance = Foo()', number=10**7)
0.3402297169959638

So the combined cost of attribute lookup, method object creation and call-stack operations adds up to the extra time you observed.
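Applying both answers to the code from the question: you can cache the bound methods outside the loops, which removes the repeated attribute lookup and method-object creation (a sketch, assuming the Stack class shown earlier; the extra call frame for each wrapper method still remains):

def dectobin1_faster(num):
    s = Stack()
    push = s.push            # look up the bound methods once
    pop = s.pop
    is_empty = s.isEmpty
    while num > 0:
        push(num % 2)
        num = num // 2
    binnum = ''
    while not is_empty():
        binnum = binnum + str(pop())
    return binnum

This narrows the gap with dectobin2 but does not close it, because each push and pop still goes through a Python-level wrapper around the underlying list method.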