python multiprocessing - OverflowError('cannot serialize a bytes object larger than 4GiB')


Question:

We are running a script using the multiprocessing library (Python 3.6), where big pd.DataFrames are passed to worker processes:

from multiprocessing import Pool
import time 

def f(x):
    # do something time consuming
    time.sleep(50)

if __name__ == '__main__':
    with Pool(10) as p:
        res = {}
        output = {}
        for id, big_df in some_dict_of_big_dfs.items():
            res[id] = p.apply_async(f, (big_df,))
        output = {id: res[id].get() for id in res}

The problem is that we are getting an error from the pickle library:

Reason: 'OverflowError('cannot serialize a bytes objects larger than 4GiB',)'

We are aware that pickle protocol 4 can serialize larger objects (related question, link), but we don't know how to change the protocol that multiprocessing is using.
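For reference, plain pickle does handle objects larger than 4 GiB once protocol 4 is passed explicitly; a rough illustration (actually running it needs several GiB of RAM):

import pickle

big = bytes(5 * 1024**3)                  # ~5 GiB of zero bytes
# pickle.dumps(big)                       # default protocol on Python 3.6 -> OverflowError
data = pickle.dumps(big, protocol=4)      # protocol 4 supports objects larger than 4 GiB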

Does anybody know what to do? Thanks!

Answer 1:

Apparently there is an open issue about this topic (Issue), and there are a few related initiatives described in this particular answer (link). I found a way to change the default pickle protocol used by the multiprocessing library, based on this answer (link). As was pointed out in the comments, this solution only works on Linux and with the OS (fork-based) multiprocessing context.

First, create a separate module:

pickle4reducer.py

from multiprocessing.reduction import ForkingPickler, AbstractReducer

class ForkingPickler4(ForkingPickler):
    def __init__(self, *args):
        # Force pickle protocol 4 regardless of what multiprocessing passes in.
        args = list(args)
        if len(args) > 1:
            args[1] = 4
        else:
            args.append(4)
        super().__init__(*args)

    @classmethod
    def dumps(cls, obj, protocol=4):
        return ForkingPickler.dumps(obj, protocol)


def dump(obj, file, protocol=4):
    ForkingPickler4(file, protocol).dump(obj)


class Pickle4Reducer(AbstractReducer):
    ForkingPickler = ForkingPickler4
    register = ForkingPickler4.register
    dump = dump

Then, in your main script, add the following:

import pickle4reducer
import multiprocessing as mp
ctx = mp.get_context()
ctx.reducer = pickle4reducer.Pickle4Reducer()  # make multiprocessing pickle with protocol 4

with mp.Pool(4) as p:
    # do something
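Putting it together with the pattern from the question, a minimal sketch could look like this (heavy_work and the tiny stand-in dict are just placeholders, not from the question):

import multiprocessing as mp
import pickle4reducer

def heavy_work(df):
    # placeholder for the real time-consuming work
    return len(df)

if __name__ == '__main__':
    ctx = mp.get_context()                          # 'fork' on Linux
    ctx.reducer = pickle4reducer.Pickle4Reducer()   # from now on, protocol 4 is used

    some_dict_of_big_dfs = {'a': [1, 2, 3], 'b': [4, 5]}  # stand-ins for big DataFrames

    with ctx.Pool(4) as p:
        res = {id: p.apply_async(heavy_work, (big_df,))
               for id, big_df in some_dict_of_big_dfs.items()}
        output = {id: r.get() for id, r in res.items()}
    print(output)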

That will probably solve the overflow problem. But be warned: you might consider reading this before doing anything, or you might hit the same error I did:

'i' format requires -2147483648 <= number <= 2147483647

(the reason for this error is well explained in the link above). In short, multiprocessing sends data between all its processes using the pickle protocol. If you are already hitting the 4 GiB limit, you should probably consider redefining your functions more as "void" methods rather than input/output methods. All this inbound/outbound data increases RAM usage, is probably inefficient by construction (it was in my case), and it might be better to point all processes at the same object rather than creating a new copy for each call.
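As a rough sketch of that last idea, assuming the Linux 'fork' start method (data created before the Pool is inherited by the workers instead of being pickled per call; the names here are illustrative):

from multiprocessing import Pool

BIG_DATA = None  # filled in the parent process before forking

def init_data():
    global BIG_DATA
    BIG_DATA = list(range(10_000_000))  # stand-in for a big DataFrame

def work(key):
    # workers only receive the small `key`; BIG_DATA is already in their memory
    return key, len(BIG_DATA)

if __name__ == '__main__':
    init_data()
    with Pool(4) as p:
        print(dict(p.map(work, ['a', 'b', 'c'])))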

Hope this helps.