Question:
I have a task which needs to be run on 'most' objects in my database once every so often (once a day, once a week, whatever). Basically this means that I have some query that looks like this running in its own thread:
for model_instance in SomeModel.objects.all():
    do_something(model_instance)
(Note that it's actually a filter(), not all(), but nonetheless I still end up selecting a very large set of objects.)
The problem I'm running into is that after running for a while the thread is killed by my hosting provider because I'm using too much memory. I'm assuming all this memory use is happening because, even though the QuerySet object returned by my query initially has a very small memory footprint, it ends up growing as the QuerySet caches each model_instance as I iterate through them.
My question is, "what is the best way to iterate through almost every SomeModel in my database in a memory-efficient way?" Or perhaps my question is, "how do I 'un-cache' model instances from a Django queryset?"
EDIT: I'm actually using the results of the queryset to build a series of new objects. As such, I don't end up updating the queried-for objects at all.
Answer 1:
So what I actually ended up doing is building something that you can 'wrap' a QuerySet in. It works by making a deepcopy of the QuerySet and using the slice syntax--e.g., some_queryset[15:45]--but then it makes another deepcopy of the original QuerySet when the slice has been completely iterated through. This means that only the set of objects returned in 'this' particular slice is stored in memory.
import copy
import logging

logger = logging.getLogger(__name__)

class MemorySavingQuerysetIterator(object):

    def __init__(self, queryset, max_obj_num=1000):
        self._base_queryset = queryset
        self._generator = self._setup()
        self.max_obj_num = max_obj_num

    def _setup(self):
        for i in xrange(0, self._base_queryset.count(), self.max_obj_num):
            # By making a copy of the queryset and using that to actually access
            # the objects we ensure that there are only `max_obj_num` objects in
            # memory at any given time
            smaller_queryset = copy.deepcopy(self._base_queryset)[i:i + self.max_obj_num]
            logger.debug('Grabbing next %s objects from DB' % self.max_obj_num)
            for obj in smaller_queryset.iterator():
                yield obj

    def __iter__(self):
        return self

    def next(self):
        return self._generator.next()
So instead of...
for obj in SomeObject.objects.filter(foo='bar'):  # something that returns *a lot* of objects
    do_something(obj)
You would do...
for obj in MemorySavingQuerysetIterator(SomeObject.objects.filter(foo='bar')):
    do_something(obj)
Please note that the intention of this is to save memory in your Python interpreter. It essentially does this by making more database queries. Usually people are trying to do the exact opposite of that--i.e., minimize database queries as much as possible without regard to memory usage. Hopefully somebody will find this useful though.
Answer 2:
You can't simply use Model.objects.all().iterator() because it will fetch all the elements of your table at once. Nor can you simply use the Model.objects.all()[offset:offset+pagesize] approach, because it will cache your results. Either of those will exceed your memory limit.
I've tried to mix both solutions, and it worked:
offset = 0
pagesize = 1000
count = Model.objects.all().count()
while offset < count:
    for m in Model.objects.all()[offset:offset + pagesize].iterator():
        do_something(m)
    offset += pagesize
Change pagesize to fit your requirements, and optionally change the [offset : offset + pagesize] to the [offset * pagesize : (offset + 1) * pagesize] idiom if it fits you better. Also, of course, replace Model with your actual model name.
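For example, a minimal sketch of that page-index idiom (the page and num_pages names here are illustrative, not from the original answer):

pagesize = 1000
count = Model.objects.all().count()
num_pages = (count + pagesize - 1) // pagesize  # round up to cover the last partial page
for page in range(num_pages):
    # Each slice issues its own LIMIT/OFFSET query, so only one page's worth
    # of objects is held in memory at a time.
    for m in Model.objects.all()[page * pagesize:(page + 1) * pagesize].iterator():
        do_something(m)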
Answer 3:
What about using django core's Paginator and Page objects documented here:
https://docs.djangoproject.com/en/dev/topics/pagination/
Something like this:
from django.core.paginator import Paginator
from djangoapp.models import SomeModel

paginator = Paginator(SomeModel.objects.all(), 1000)  # chunks of 1000

for page_idx in paginator.page_range:
    for row in paginator.page(page_idx).object_list:
        # here you can do what you want with the row
        do_something(row)
    print "done processing page %s" % page_idx
Answer 4:
Many solutions implement SQL OFFSET and LIMIT by slicing the queryset. As stefano notes, with larger datasets this becomes very inefficient. The proper way of handling this is to use a server-side cursor to keep track of your position instead of issuing repeated OFFSET queries.
Native server-side cursor support is in the works for Django. Until it's ready, here is a simple implementation if you are using Postgres with the psycopg2 backend:
from django.db import connection, transaction

def server_cursor_query(Table):
    table_name = Table._meta.db_table

    # There must be an existing connection before creating a server-side cursor
    if connection.connection is None:
        dummy_cursor = connection.cursor()  # not a server-side cursor

    # Optionally keep track of the columns so that we can return model instances. However,
    # if your table has foreign keys, you may need to rename them appropriately
    columns = [x.name for x in Table._meta.local_fields]

    cursor = connection.connection.cursor(name='gigantic_cursor')  # a server-side cursor

    with transaction.atomic():
        cursor.execute('SELECT {} FROM {}'.format(', '.join(columns), table_name))

        while True:
            rows = cursor.fetchmany(1000)
            if not rows:
                break
            for row in rows:
                fields = dict(zip(columns, row))
                yield Table(**fields)
See this blog post for a great explanation of memory issues from large queries in django.
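As a side note, native support has since landed: on Django 1.11+ with the PostgreSQL backend, QuerySet.iterator() streams results through a server-side cursor, so a plain call like the sketch below may be all you need (the chunk_size argument only exists on Django 2.0+):

# Assuming Django 1.11+ and PostgreSQL: .iterator() uses a server-side cursor,
# so rows are streamed instead of being fetched and cached all at once.
for obj in SomeModel.objects.filter(foo='bar').iterator(chunk_size=2000):
    do_something(obj)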
Answer 5:
I'm continuing my research, and it kind of looks like I want to do the equivalent of an SQL OFFSET and LIMIT, which, according to the Django docs on Limiting QuerySets, means I want to use the slice syntax, e.g., SomeModel.objects.all()[15:25].
So now I'm thinking maybe something like this is what I'm looking for:
# Figure out the number of objects I can safely hold in memory
# I'll just say 100 for right now
number_of_objects = 100
count = SomeModel.objects.all().count()
for i in xrange(0, count, number_of_objects):
    smaller_queryset = SomeModel.objects.all()[i:i + number_of_objects]
    for model_instance in smaller_queryset:
        do_something(model_instance)
By my reckoning this would make it so that smaller_queryset would never grow too large.
Answer 6:
There is a django snippet for this:
http://djangosnippets.org/snippets/1949/
It iterates over a queryset by yielding rows of smaller "chunks" of the original queryset. It ends up using significantly less memory while allowing you to tune for speed. I use it in one of my projects.
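The snippet itself isn't reproduced here, but the general idea is a generator that walks the queryset in primary-key order and fetches one slice at a time. A rough sketch of that approach, assuming an auto-incrementing integer primary key (the queryset_iterator name and chunk_size default are illustrative, not the snippet's exact code):

import gc

def queryset_iterator(queryset, chunk_size=1000):
    # Walk the queryset in primary-key order, one slice at a time, so only
    # about `chunk_size` model instances are held in memory at once.
    pk = 0
    last_pk = queryset.order_by('-pk').values_list('pk', flat=True).first()
    if last_pk is None:
        return  # empty queryset
    queryset = queryset.order_by('pk')
    while pk < last_pk:
        found = False
        for row in queryset.filter(pk__gt=pk)[:chunk_size]:
            found = True
            pk = row.pk
            yield row
        if not found:
            break  # rows were deleted underneath us; stop rather than loop forever
        gc.collect()  # release objects cached by the finished slice

Usage would then look much like the earlier examples:

for obj in queryset_iterator(SomeModel.objects.filter(foo='bar')):
    do_something(obj)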