I'm using multiprocessing's Pool.map
for a scoring procedure:
- "cursor" with millions of arrays from a data source
- calculation
- save the result to a data sink
The results are independent.
I'm wondering if I can avoid the memory demand. It looks like every array is first pulled into Python, and only then do steps 2 and 3 (calculation and saving) run. Either way, I do see a speed improvement.
# data source and sink are in MongoDB
from multiprocessing import Pool

def scoring(some_arguments):
    # ... some computation, then persist the result ...
    collection.update({uid: _uid}, {'$set': res_profile}, upsert=True)

cursor = tracking.find(timeout=False)
score_proc_pool = Pool(options.cores)
# finally, a wrapper so that only the document is passed to map
score_proc_pool.map(scoring_wrapper, cursor, chunksize=10000)
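Would it help to use imap_unordered instead, so the input isn't turned into a list up front and results stream back lazily? A minimal sketch of what I have in mind (same scoring_wrapper, cursor and options.cores as above; the chunksize of 100 is just a guess):

from multiprocessing import Pool

score_proc_pool = Pool(options.cores)

# imap_unordered does not convert the input iterable to a list up front the
# way map() does, and it yields results lazily as workers finish, so I hope
# it keeps less of the cursor in memory at once.
for _ in score_proc_pool.imap_unordered(scoring_wrapper, cursor, chunksize=100):
    pass  # nothing to collect; scoring_wrapper persists its result itself

score_proc_pool.close()
score_proc_pool.join()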
Am I doing something wrong, or is there a better way to do this in Python?