How can I speed up unpickling large objects if I have plenty of RAM?

Posted 2020-02-04 07:02

It's taking me up to an hour to read a 1-gigabyte NetworkX graph data structure using cPickle (it's 1 GB when stored on disk as a binary pickle file).

Note that the file quickly loads into memory. In other words, if I run:

import cPickle as pickle

f = open("bigNetworkXGraph.pickle","rb")
binary_data = f.read() # This part doesn't take long
graph = pickle.loads(binary_data) # This takes ages

How can I speed this last operation up?

Note that I have tried pickling the data using both binary protocols (1 and 2), and it doesn't seem to make much difference which protocol I use. Also note that although I am using the "loads" (meaning "load string") function above, it is loading binary data, not ASCII data.

I have 128 GB of RAM on the system I'm using, so I'm hoping that somebody will tell me how to increase some read buffer buried in the pickle implementation.
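(For context, the kind of thing I'm imagining is something like the sketch below; the 16 MB buffer size is just an illustrative value, and graph is the NetworkX object from above. Passing the file object straight to pickle.load avoids keeping a second full in-memory copy of the serialized data, and pickle.HIGHEST_PROTOCOL selects the newest binary protocol when writing.)

import cPickle as pickle

# Writing: the highest available binary protocol (2 on Python 2) is
# generally the fastest one to load back.
with open("bigNetworkXGraph.pickle", "wb") as f:
    pickle.dump(graph, f, pickle.HIGHEST_PROTOCOL)

# Reading: let pickle pull bytes from the file object through a larger
# OS-level buffer instead of building a second full in-memory copy of
# the serialized data with f.read().
with open("bigNetworkXGraph.pickle", "rb", 16 * 1024 * 1024) as f:
    graph = pickle.load(f)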

8 Answers
smile是对你的礼貌
#2 · 2020-02-04 07:46

I had great success reading a ~750 MB igraph data structure (a binary pickle file) using cPickle itself. This was achieved by simply wrapping the pickle load call as mentioned here.

An example snippet in your case would be something like:

import cPickle as pickle
import gc

f = open("bigNetworkXGraph.pickle", "rb")

# disable garbage collector
gc.disable()

graph = pickle.load(f)

# enable garbage collector again
gc.enable()
f.close()

This definitely isn't the most apt way to do it; however, it reduces the time required drastically.
(For me, it reduced from 843.04s to 41.28s, around 20x)
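A small variation on the snippet above, in case unpickling can fail partway through: wrapping the load in try/finally makes sure the garbage collector is re-enabled no matter what (a sketch only, using the same file name as above):

import cPickle as pickle
import gc

f = open("bigNetworkXGraph.pickle", "rb")
gc.disable()          # skip GC passes triggered by the many small allocations
try:
    graph = pickle.load(f)
finally:
    gc.enable()       # always restore normal garbage collection
    f.close()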

再贱就再见
#3 · 2020-02-04 07:46

This is ridiculous.

I have a huge ~150 MB dictionary (a collections.Counter, actually) that I was reading and writing using cPickle in the binary format.

Writing it took about 3 min.
I stopped reading it in at the 16 min mark, with my RAM completely choked up.

I'm now using marshal, and it takes:
write: ~3 s
read: ~5 s

I poked around a bit, and came across this article.
Guess I've never looked at the pickle source, but it builds an entire VM to reconstruct the dictionary?
There should be a note about performance on very large objects in the documentation IMHO.
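To make concrete what I mean by using marshal, here is a rough sketch (the file name is made up; and since marshal only accepts built-in types and its format isn't guaranteed stable across Python versions, the Counter is converted to a plain dict when dumping and rebuilt on load):

import marshal
from collections import Counter

counts = Counter({"some_key": 3, "another_key": 1})  # placeholder data

# marshal only handles built-in types, so dump a plain dict
with open("counts.marshal", "wb") as f:
    marshal.dump(dict(counts), f)

# ...later: load the dict back and rebuild the Counter
with open("counts.marshal", "rb") as f:
    counts = Counter(marshal.load(f))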
