It seems that handlers from the logging module and multiprocessing jobs do not mix:
import functools
import logging
import multiprocessing as mp

logger = logging.getLogger('myLogger')
handler = logging.FileHandler('logFile')

def worker(x, handler):
    print x ** 2

pWorker = functools.partial(worker, handler=handler)

if __name__ == '__main__':
    pool = mp.Pool(processes=1)
    pool.map(pWorker, range(3))
    pool.close()
    pool.join()
Out:
cPickle.PicklingError: Can't pickle <type 'thread.lock'>: attribute lookup thread.lock failed
If I replace pWorker with either of the following, no error is raised:
# this works
def pWorker(x):
    worker(x, handler)

# this works too
pWorker = functools.partial(worker, handler=open('logFile'))
I don't really understand the PicklingError. Is it because objects of class logging.FileHandler are not picklable? (I googled it but didn't find anything.)
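One piece of the puzzle: pickling a top-level function serializes only a reference to its module and name, not the globals (such as handler) that it uses, which is why wrapping the call in a module-level pWorker sidesteps the error. A minimal check, with a hypothetical stand-in function:

```python
import pickle

def pWorker(x):
    # stands in for the question's wrapper; any globals it touches
    # are looked up at call time, not baked into the pickle
    return x * x

data = pickle.dumps(pWorker)   # stores only a module/name reference
restored = pickle.loads(data)
print(restored(3))  # → 9
```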
The FileHandler object internally uses a threading.Lock to synchronize writes between threads. However, the thread.lock object returned by threading.Lock can't be pickled, which means it can't be sent between processes, which is required to send it to the child via pool.map.
There is a section in the multiprocessing docs that talks about how logging with multiprocessing works here. Basically, you need to let the child process inherit the parent process's logger, rather than trying to explicitly pass loggers or handlers via calls to map.
Note, though, that on Linux, you can do this:
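A sketch of that approach, reusing the question's names; the init helper and the module-global handler are assumptions about the shape of the code, and print() is used so the snippet runs under both Python 2 and 3:

```python
import logging
import multiprocessing as mp

handler = None  # set in each child process by the initializer

def init(h):
    # Runs once in each child as it starts; with os.fork the handler
    # object arrives via inheritance, so it is never pickled.
    global handler
    handler = h

def worker(x):
    # handler is now available here without having been passed via map
    print(x ** 2)

if __name__ == '__main__':
    file_handler = logging.FileHandler('logFile')
    pool = mp.Pool(processes=1, initializer=init, initargs=(file_handler,))
    pool.map(worker, range(3))
    pool.close()
    pool.join()
```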
initializer/initargs are used to run a method once in each of the pool's child processes as soon as they start. On Linux this allows the handler to be passed to the child via inheritance, thanks to the way os.fork works. However, this won't work on Windows; because it lacks support for os.fork, it would still need to pickle handler to pass it via initargs.
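To see the failure directly, you can try pickling the handler by hand. A quick check (the exact exception differs by version, cPickle.PicklingError on Python 2 versus a TypeError about the internal lock on Python 3, but the cause is the same):

```python
import logging
import pickle

handler = logging.FileHandler('logFile')
try:
    pickle.dumps(handler)
except Exception as exc:
    # Fails because the handler's internal lock cannot be serialized.
    print('pickling failed:', exc)
```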