I've been looking for a simple way to know the size in bytes of arrays and dictionaries of objects, like
[ [1,2,3], [4,5,6] ] or { 1:{2:2} }
Many topics say to use pylab, for example:
from pylab import *
A = array( [ [1,2,3], [4,5,6] ] )
A.nbytes
24
But what about dictionaries? I've seen many answers suggesting the use of pysize or heapy. A simple answer is given by Torsten Marek in this link: Which Python memory profiler?, but I still don't have a clear interpretation of the output, because the byte counts don't match.
Pysize seems to be more complicated, and I don't have a clear idea yet about how to use it.
Given the simplicity of the size calculation I want to perform (no classes, no complex structures), any ideas about a simple way to get an approximate estimate of the memory usage of this kind of objects?
Kind regards.
There's:
>>> import sys
>>> sys.getsizeof([1,2, 3])
96
>>> a = []
>>> sys.getsizeof(a)
72
>>> a = [1]
>>> sys.getsizeof(a)
80
But I wouldn't say it's reliable, as Python has overhead for every object, and there are objects that contain nothing but references to other objects, so it's not quite the same as in C and other languages.
Have a read of the docs on sys.getsizeof and go from there, I guess.
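To illustrate the point about references, here is a minimal sketch (not from the original answer) that adds the shallow sizes of a nested list's elements by hand; the exact numbers depend on your Python version and platform:
import sys

data = [[1, 2, 3], [4, 5, 6]]
shallow = sys.getsizeof(data)                      # size of the outer list object only
deep = shallow + sum(sys.getsizeof(row) + sum(sys.getsizeof(n) for n in row)
                     for row in data)              # add each inner list and each int
print(shallow, deep)                               # deep is noticeably larger than shallow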
A bit late to the party, but an easy way to get the size of a dictionary is to pickle it first.
Using sys.getsizeof on a Python object (including a dictionary) may not be exact, since it does not count referenced objects.
The way to handle it is to serialize the object into a string and use sys.getsizeof on the string. The result will be much closer to what you want.
import cPickle
import sys

mydict = {'key1': 'some long string', 'key2': ['some', 'list'], 'key3': 'whatever other data'}
Doing sys.getsizeof(mydict) is not exact, so pickle it first:
mydict_as_string = cPickle.dumps(mydict)
Now we can see how much space it takes up with:
print sys.getsizeof(mydict_as_string)
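For reference, a Python 3 sketch of the same idea (the answer above uses Python 2's cPickle; in Python 3 the module is pickle and dumps returns bytes). Note that this measures the size of the serialized representation, which is only an approximation of the in-memory footprint:
import pickle
import sys

mydict = {'key1': 'some long string', 'key2': ['some', 'list'], 'key3': 'whatever other data'}
mydict_as_bytes = pickle.dumps(mydict)        # bytes in Python 3
print(sys.getsizeof(mydict_as_bytes))         # size of the serialized form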
None of the answers here are truly generic.
The following solution will work with any type of object recursively, without the need for an expensive recursive implementation:
import gc
import sys
def get_obj_size(obj):
    marked = {id(obj)}
    obj_q = [obj]
    sz = 0

    while obj_q:
        sz += sum(map(sys.getsizeof, obj_q))

        # Look up all the objects referred to by the objects in obj_q.
        # See: https://docs.python.org/3.7/library/gc.html#gc.get_referents
        all_refr = ((id(o), o) for o in gc.get_referents(*obj_q))

        # Filter out objects that are already marked.
        # Using dict notation will prevent repeated objects.
        new_refr = {o_id: o for o_id, o in all_refr if o_id not in marked and not isinstance(o, type)}

        # The new obj_q will be the ones that were not marked,
        # and we will update marked with their ids so we will
        # not traverse them again.
        obj_q = new_refr.values()
        marked.update(new_refr.keys())

    return sz
For example:
>>> import numpy as np
>>> x = np.random.rand(1024).astype(np.float64)
>>> y = np.random.rand(1024).astype(np.float64)
>>> a = {'x': x, 'y': y}
>>> get_obj_size(a)
16816
See my repository for more information, or simply install my package (objsize):
$ pip install objsize
Then:
>>> from objsize import get_deep_size
>>> get_deep_size(a)
16816
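As a quick check against the objects from the question, either function above can be applied directly (exact numbers vary by Python version and platform, so none are shown here):
nested_list = [[1, 2, 3], [4, 5, 6]]
nested_dict = {1: {2: 2}}
print(get_obj_size(nested_list), get_obj_size(nested_dict))
print(get_deep_size(nested_list), get_deep_size(nested_dict))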
Use this recipe, taken from here:
http://code.activestate.com/recipes/577504-compute-memory-footprint-of-an-object-and-its-cont/
from __future__ import print_function
from sys import getsizeof, stderr
from itertools import chain
from collections import deque
try:
    from reprlib import repr
except ImportError:
    pass

def total_size(o, handlers={}, verbose=False):
    """ Returns the approximate memory footprint an object and all of its contents.

    Automatically finds the contents of the following builtin containers and
    their subclasses: tuple, list, deque, dict, set and frozenset.
    To search other containers, add handlers to iterate over their contents:

        handlers = {SomeContainerClass: iter,
                    OtherContainerClass: OtherContainerClass.get_elements}

    """
    dict_handler = lambda d: chain.from_iterable(d.items())
    all_handlers = {tuple: iter,
                    list: iter,
                    deque: iter,
                    dict: dict_handler,
                    set: iter,
                    frozenset: iter,
                   }
    all_handlers.update(handlers)   # user handlers take precedence
    seen = set()                    # track which object id's have already been seen
    default_size = getsizeof(0)     # estimate sizeof object without __sizeof__

    def sizeof(o):
        if id(o) in seen:           # do not double count the same object
            return 0
        seen.add(id(o))
        s = getsizeof(o, default_size)

        if verbose:
            print(s, type(o), repr(o), file=stderr)

        for typ, handler in all_handlers.items():
            if isinstance(o, typ):
                s += sum(map(sizeof, handler(o)))
                break
        return s

    return sizeof(o)

##### Example call #####

if __name__ == '__main__':
    d = dict(a=1, b=2, c=3, d=[4,5,6,7], e='a string of chars')
    print(total_size(d, verbose=True))
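As the docstring notes, custom containers can be measured by passing a handler that iterates over their contents. A minimal sketch, using a hypothetical Bag class purely for illustration:
class Bag:
    """Hypothetical user-defined container, not part of the recipe."""
    def __init__(self, *items):
        self._items = list(items)
    def get_elements(self):
        return iter(self._items)

b = Bag('spam', 'eggs', [1, 2, 3])
print(total_size(b, handlers={Bag: Bag.get_elements}))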