I am reading an 800 MB CSV file with pandas.read_csv, and then use the original Python pickle.dump(dataframe) to save it. The result is a 4 GB pkl file, so the CSV size is multiplied by 5.
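Roughly what I am doing (simplified, with placeholder file names):

```python
import pickle
import pandas as pd

df = pd.read_csv("data.csv")        # the ~800 MB CSV
with open("data.pkl", "wb") as f:
    pickle.dump(df, f)              # the resulting .pkl ends up around 4 GB
```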
I expected pickle to compress data rather than expand it, especially since I can gzip the CSV file, which compresses it to 200 MB, dividing its size by 4.
I want to speed up the loading time of my program and thought that pickling would help, but since disk access is the main bottleneck, I now gather that I would instead have to compress the files and then use the compression option of pandas.read_csv to speed up the loading time.
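Something along these lines (a sketch; the file name is a placeholder):

```python
import pandas as pd

# after gzipping the CSV once on disk (e.g. `gzip data.csv` -> data.csv.gz),
# let pandas decompress it while reading:
df = pd.read_csv("data.csv.gz", compression="gzip")   # or compression="infer"
```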
Is that correct?
Is it normal that pickling a pandas dataframe increases the data size?
How do you usually speed up loading time?
Up to what data size would you still load with pandas?
You can also use pandas' own pickle methods, DataFrame.to_pickle and pandas.read_pickle, which can also compress your data if you pass the compression keyword (or use a compressed file extension).
Save a dataframe:
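(A minimal sketch; the path is a placeholder, and with a .gz extension recent pandas versions will gzip-compress the pickle.)

```python
import pandas as pd

df = pd.read_csv("data.csv")
df.to_pickle("data.pkl.gz")      # compression inferred from the .gz extension
# or explicitly:
# df.to_pickle("data.pkl.gz", compression="gzip")
```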
Load it:
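And to read it back (same placeholder path):

```python
import pandas as pd

df = pd.read_pickle("data.pkl.gz")   # decompression is likewise inferred from the extension
```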
Don't load the 800 MB file into memory. It will increase your loading time. Pickled objects also take more time to load. Instead, store the CSV file as a sqlite3 table (sqlite3 ships with Python) and then query the table each time, depending on your need.

It is likely in your best interest to stash your CSV file in a database of some sort and perform operations on that rather than loading the CSV file to RAM, as Kathirmani suggested. You will see the speedup in loading time that you expect simply because you are not filling up 800 MB worth of RAM every time you load your script.
File compression and loading time are two conflicting elements of what you seem to be trying to accomplish. Compressing the CSV file and loading that will take more time; you've now added the extra step of having to decompress the file, which doesn't solve your problem.
Consider a precursory step to ship the data to an sqlite3 database, as described here: Importing a CSV file into a sqlite3 database table using Python. You now have the pleasure of being able to query a subset of your data and quickly load it into a pandas.DataFrame for further use, as follows:
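A minimal sketch of that query step, assuming the CSV contents already live in a table called data inside data.db (both names are placeholders, as is the some_column filter):

```python
import sqlite3
import pandas as pd

# connect to the database that already holds the CSV contents
conn = sqlite3.connect("data.db")

# load only the subset of rows you actually need into a DataFrame
df = pd.read_sql_query("SELECT * FROM data WHERE some_column > 100", conn)

conn.close()
```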
Conversely, you can use pandas.DataFrame.to_sql() to save these for later use.

Not sure why you think pickling compresses the data size; pickling creates a serialized (byte-string) version of your Python object so that it can be loaded back as a Python object.
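A quick illustration, using an arbitrary object (nothing pandas-specific):

```python
import pickle

obj = {"a": [1, 2, 3]}
blob = pickle.dumps(obj)             # a bytes serialization, not a compressed representation
restored = pickle.loads(blob)        # comes back as an equivalent Python object
print(type(blob), restored == obj)   # <class 'bytes'> True
```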
The method to_csv does support compression as a kwarg ('gzip' and 'bz2').
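For example (file names are placeholders; on reading, pandas can also infer the compression from the extension):

```python
import pandas as pd

df = pd.read_csv("data.csv")
df.to_csv("data.csv.gz", compression="gzip")   # or compression="bz2" for a .bz2 file

# read it back; compression is inferred from the .gz extension by default
df = pd.read_csv("data.csv.gz")
```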