I have a project that needs to read data and then write it to more than 23 CSV files in parallel, depending on each line. For example, if the line is about temperature, we should write to temperature.csv; if about humidity, to humid.CSV, etc.
I tried the following:
```
with open('Results\\GHCN_Daily\\MetLocations.csv', 'wb+') as locations, \
     open('Results\\GHCN_Daily\\Tmax.csv', 'wb+') as tmax_d, \
     open('Results\\GHCN_Daily\\Tmin.csv', 'wb+') as tmin_d, \
     open('Results\\GHCN_Daily\\Snow.csv', 'wb+') as snow_d, \
     .
     .
     # total of 23 'open' statements
     .
     open('Results\\GHCN_Daily\\SnowDepth.csv', 'wb+') as snwd_d, \
     open('Results\\GHCN_Daily\\Cloud.csv', 'wb+') as cloud_d, \
     open('Results\\GHCN_Daily\\Evap.csv', 'wb+') as evap_d, \
```
I got the following error:

```
SystemError: too many statically nested blocks
```
I searched for this error, and I got to this post, which says that

You will encounter this error when you nest blocks more than 20. This is a design decision of the Python interpreter to restrict it to 20.
But the open statements I wrote open the files in parallel, not nested.
What am I doing wrong, and how can I solve this problem?
Thanks in advance.
If the data is not very huge, why not read in all the data, group it by category (e.g. put all data about temperature into one group), and then write each group to its corresponding file in one go?
It would be OK to open more than 20 files this way, though I'm not sure you really need to do so.
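A minimal sketch of that approach. The categories, the record layout, and the output file names here are made up for illustration; the real project would parse the category out of each input line:

```python
from collections import defaultdict

# Hypothetical input: (category, csv_line) pairs. In the real project the
# category would be parsed from each line of the source data.
records = [
    ('temperature', '2015-01-01,TMAX,25'),
    ('humidity', '2015-01-01,RH,60'),
    ('temperature', '2015-01-02,TMAX,27'),
]

# Group all lines by category first, holding everything in memory.
groups = defaultdict(list)
for category, line in records:
    groups[category].append(line)

# Only now open each file, one at a time, and write its whole group.
for category, lines in groups.items():
    with open(category + '.csv', 'w') as f:
        f.write('\n'.join(lines) + '\n')
```

Since each file is opened and closed inside its own loop iteration, no more than one file is open at a time, so the nesting limit never comes into play.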
Each open is a nested context; it's just that Python syntax allows you to put them in a comma-separated list.
contextlib.ExitStack is a context container that lets you put as many contexts as you like in a stack and exits each of them when you are done. So, you could do:
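A sketch of the ExitStack approach; the file names below are placeholders rather than the asker's full set of 23 paths:

```python
from contextlib import ExitStack

# Hypothetical category names; the real project has 23 of them.
names = ['Tmax', 'Tmin', 'Snow', 'Cloud', 'Evap']

with ExitStack() as stack:
    # enter_context() registers each file with the single ExitStack,
    # so there is only one nested block no matter how many files.
    files = {name: stack.enter_context(open(name + '.csv', 'w'))
             for name in names}
    files['Tmax'].write('2015-01-01,25\n')
# Leaving the with-block closes every registered file automatically.
```

A dict keyed by category also makes the line-routing logic simple: look up the category, get the right file handle, write.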
If you find dict access less tidy than attribute access, you could create a simple container class.

I would have a list of the possible files, files = ['humidity', 'temperature', ...].
Make a dict that contains, for each possible file, a DataFrame and a path to the file, for example:
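A sketch of that dict, assuming pandas and made-up category names and columns:

```python
import pandas as pd

# Hypothetical categories; the real project has 23 of them.
files = ['humidity', 'temperature']

# One entry per category: an empty DataFrame plus the target CSV path.
data = {name: {'df': pd.DataFrame(columns=['date', 'value']),
               'path': name + '.csv'}
        for name in files}
```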
Afterwards, I would read whatever document you are getting the values from and store each value in the proper dictionary DataFrame.
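The routing step could look like this; the `category,date,value` line format and the source lines are invented for the example:

```python
import pandas as pd

files = ['humidity', 'temperature']
data = {name: {'df': pd.DataFrame(columns=['date', 'value']),
               'path': name + '.csv'}
        for name in files}

# Hypothetical source lines in "category,date,value" form.
source = ['temperature,2015-01-01,25', 'humidity,2015-01-01,60']

for line in source:
    category, date, value = line.split(',')
    # Append the row to the DataFrame belonging to this line's category.
    row = pd.DataFrame([{'date': date, 'value': value}])
    data[category]['df'] = pd.concat([data[category]['df'], row],
                                     ignore_index=True)
```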
When finished, just save the data to CSV, for example:
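For instance, looping over the same dict structure (with one illustrative entry here) and writing each DataFrame to its path:

```python
import pandas as pd

# One example entry; the real dict would hold all 23 categories.
data = {'temperature': {'df': pd.DataFrame({'date': ['2015-01-01'],
                                            'value': [25]}),
                        'path': 'temperature.csv'}}

# Write each category's DataFrame to its CSV path.
for name, entry in data.items():
    entry['df'].to_csv(entry['path'], index=False)
```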
Hope it helps.