I'm currently working on a Python project that uses relatively large dictionaries (around 800 MB). I tried to store one of these dictionaries with pickle, but got a MemoryError.

What is the proper way to save this kind of file in Python? Should I use a database?
Perhaps you could use sqlite3? Unless you have a really old version of Python, it should be available: https://docs.python.org/2/library/sqlite3.html

I have not checked sqlite3's limitations, and I cannot vouch for its usefulness in your situation, but it is worth checking out.
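To make the suggestion concrete, here is a minimal sketch of using sqlite3 as a disk-backed key-value store. The table name `kv`, the helper names, and the choice to pickle each value individually are my own illustration, not anything from the question:

```python
import pickle
import sqlite3

# Open (or create) a database file on disk; the data never has to fit in RAM at once.
conn = sqlite3.connect("bigdict.db")
conn.execute("CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value BLOB)")

def kv_put(key, value):
    # Pickle each value individually instead of the whole 800 MB dict.
    blob = pickle.dumps(value, protocol=pickle.HIGHEST_PROTOCOL)
    conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, blob))
    conn.commit()

def kv_get(key):
    row = conn.execute("SELECT value FROM kv WHERE key = ?", (key,)).fetchone()
    return None if row is None else pickle.loads(row[0])

kv_put("alpha", {"nested": [1, 2, 3]})
print(kv_get("alpha"))  # {'nested': [1, 2, 3]}
```

Because each entry is written and read on its own, peak memory stays proportional to the largest single value rather than to the whole dictionary.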
When you pickle the entire data structure at once, you are limited by system RAM. You can, however, do it in chunks.

streaming-pickle looks like a solution for pickling file-like objects larger than the memory on board: https://gist.github.com/hardbyte/5955010
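This is not streaming-pickle itself, but the chunking idea can be illustrated with the stdlib alone: append one pickled (key, value) pair per record and read them back lazily (the function names here are mine):

```python
import pickle

def dump_items(d, path):
    # Write one pickled (key, value) pair at a time, so only a single
    # entry needs to be in memory during serialization.
    with open(path, "wb") as f:
        for item in d.items():
            pickle.dump(item, f, protocol=pickle.HIGHEST_PROTOCOL)

def load_items(path):
    # Lazily yield the pairs back; pickle.load reads exactly one object per call.
    with open(path, "rb") as f:
        while True:
            try:
                yield pickle.load(f)
            except EOFError:
                return

dump_items({"a": 1, "b": 2}, "chunks.pkl")
print(dict(load_items("chunks.pkl")))  # {'a': 1, 'b': 2}
```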
The Python-standard shelve module provides a dict-like interface for persistent objects. It works with many database backends and is not limited by RAM. The advantage of using shelve over working with databases directly is that most of your existing code remains as it was. This comes at the cost of speed (compared to in-RAM dicts) and at the cost of flexibility (compared to working directly with databases).

As opposed to shelve, klepto doesn't need to store the entire dict in a single file (using a single file is very slow for read-write when you only need one entry). Also, as opposed to shelve, klepto can store almost any type of Python object you can put in a dictionary (you can store functions, lambdas, class instances, sockets, multiprocessing queues, whatever).

klepto provides a dictionary abstraction for writing to a database, including treating your filesystem as a database (i.e. writing the entire dictionary to a single file, or writing each entry to its own file). For large data, I often choose to represent the dictionary as a directory on my filesystem, and have each entry be a file. klepto also offers a variety of caching algorithms (like mru, lru, lfu, etc.) to help you manage your in-memory cache, and will use the chosen algorithm to do the dump and load to the archive backend for you.

klepto also provides memory-mapped file backends for fast read-write. There are other flags, such as compression, that can be used to further customize how your data is stored. It's equally easy (the exact same interface) to use a database (MySQL, etc.) as a backend instead of your filesystem. You can use the flag cached=False to turn off memory caching completely and read and write directly to and from disk or database.

Get klepto
here: https://github.com/uqfoundation

Since it is a dictionary, you can convert it to a list of key-value pairs ([(k, v)]). You can then serialize each tuple into a string with whatever technology you'd like (such as pickle), and store them in a file line by line. This way, parallelizing processing, checking the file's contents, etc. is also easier.

There are libraries that allow you to stream single objects, but IMO that just makes it more complicated. Storing it line by line removes so much headache.
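A minimal sketch of the line-per-entry idea: the base64 step is my addition so that binary pickle data stays newline-safe and each line really is one independent record (the helper names are mine):

```python
import base64
import pickle

def save_lines(d, path):
    # One key-value pair per line; base64 keeps the pickled bytes free of
    # raw newlines, so each line is exactly one record.
    with open(path, "w") as f:
        for pair in d.items():
            f.write(base64.b64encode(pickle.dumps(pair)).decode("ascii") + "\n")

def load_lines(path):
    # Rebuild the dict by decoding one independent record per line.
    with open(path) as f:
        return dict(pickle.loads(base64.b64decode(line)) for line in f)

save_lines({"x": [1, 2], "y": "hello"}, "records.txt")
print(load_lines("records.txt"))  # {'x': [1, 2], 'y': 'hello'}
```

Since every line is self-contained, the file can be split and processed in parallel, inspected with ordinary text tools, or read back lazily one record at a time.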