I am using the psycopg2 module in Python to read from a Postgres database, and I need to do some operation on every row of a column in a table that has more than 1 million rows.
I would like to know: would cur.fetchall() fail or cause my server to go down? (My RAM might not be big enough to hold all that data.)
q="SELECT names from myTable;"
cur.execute(q)
rows=cur.fetchall()
for row in rows:
doSomething(row)
What is the smarter way to do this?
fetchall() fetches all of the remaining rows in one go (the arraysize setting only limits fetchmany()), so to avoid a massive memory hit you can either fetch rows in manageable batches, or simply step through the cursor until it is exhausted:
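A sketch of both options, assuming cur is an open psycopg2 cursor, doSomething is the processing function from the question, and the batch size of 2000 is an arbitrary placeholder:

# Option 1: fetch rows in manageable batches with fetchmany()
cur.execute("SELECT names from myTable;")
while True:
    rows = cur.fetchmany(2000)   # at most 2000 rows per call
    if not rows:
        break
    for row in rows:
        doSomething(row)

# Option 2: step through the cursor until it is exhausted
cur.execute("SELECT names from myTable;")
for row in cur:
    doSomething(row)

Note that with psycopg2's default client-side cursor the full result set is still transferred to the client when execute() runs, so these loops mainly change how you iterate over it; the server-side (named) cursors in the later answers are what actually keep memory bounded.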
The solution Burhan pointed out reduces the memory usage for large datasets by only fetching single rows:
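A minimal sketch of that single-row approach, again assuming cur and doSomething as in the question:

cur.execute("SELECT names from myTable;")
while True:
    row = cur.fetchone()   # pull one row at a time
    if row is None:
        break
    doSomething(row)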
However, I noticed a significant slowdown when fetching rows one by one. I access an external database over an internet connection, which might be a reason for it.

Having a server-side cursor and fetching rows in batches proved to be the most performant solution. You can change the SQL statements (as in alecxe's answer), but there is also a pure Python approach using the feature provided by psycopg2:
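A sketch of that named (server-side) cursor pattern; the connection string, cursor name, and itersize value are placeholders, and doSomething stands in for the question's processing function:

import psycopg2

with psycopg2.connect("dbname=mydb") as conn:             # placeholder connection string
    with conn.cursor(name="my_named_cursor") as cursor:   # giving it a name makes it server-side
        cursor.itersize = 2000   # rows fetched from the server per network round trip
        cursor.execute("SELECT names from myTable;")
        for row in cursor:       # rows stream in batches of itersize
            doSomething(row)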
You can find more about server-side cursors in the psycopg2 wiki.
Here is the code to use for a simple server-side cursor with the speed of fetchmany management. The principle is to use a named cursor in Psycopg2 and give it a good itersize so that it loads many rows at once, as fetchmany would, but with a single for rec in cursor loop that does an implicit fetchone().

With this code I make queries over 150 million rows from a multi-billion-row table within 1 hour and 200 MB of RAM.
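A sketch of that pattern, assuming conn is an open psycopg2 connection; the cursor name, itersize value, and query are placeholders:

cursor = conn.cursor(name="bulk_read")   # named cursor, so rows stay on the server
cursor.itersize = 20000                  # a "good" itersize: many rows per round trip
cursor.execute("SELECT names from myTable;")

for rec in cursor:       # one loop; psycopg2 transparently fetches itersize rows at a time
    doSomething(rec)

cursor.close()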
Consider using a server-side cursor. Here's an example:
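A sketch using SQL-level cursor commands (DECLARE/FETCH); the cursor name and batch size are placeholders, and cur is assumed to be an ordinary psycopg2 cursor inside an open transaction (psycopg2's default):

cur.execute("DECLARE names_cur CURSOR FOR SELECT names from myTable")
while True:
    cur.execute("FETCH 1000 FROM names_cur")   # ask the server for the next 1000 rows
    rows = cur.fetchall()                      # only this batch is held in client memory
    if not rows:
        break
    for row in rows:
        doSomething(row)
cur.execute("CLOSE names_cur")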