The Problem:
I am getting the following traceback and don't understand what it means or how to fix it:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python26\lib\multiprocessing\forking.py", line 342, in main
    self = load(from_parent)
  File "C:\Python26\lib\pickle.py", line 1370, in load
    return Unpickler(file).load()
  File "C:\Python26\lib\pickle.py", line 858, in load
    dispatch[key](self)
  File "C:\Python26\lib\pickle.py", line 1083, in load_newobj
    obj = cls.__new__(cls, *args)
TypeError: object.__new__(pyodbc.Cursor) is not safe, use pyodbc.Cursor.__new__()
The situation:
I've got a SQL Server database full of data to be processed. I'm trying to use the multiprocessing module to parallelize the work and take advantage of the multiple cores on my computer. My general class structure is as follows:
- MyManagerClass
  - This is the main class, where the program starts.
  - It creates two multiprocessing.Queue objects, one work_queue and one write_queue.
  - It also creates and launches the other processes, then waits for them to finish.
  - NOTE: this is not an extension of multiprocessing.managers.BaseManager()
- MyReaderClass
  - This class reads the data from the SQL Server database.
  - It puts items in the work_queue.
- MyWorkerClass
  - This is where the work processing happens.
  - It gets items from the work_queue and puts completed items in the write_queue.
- MyWriterClass
  - This class is in charge of writing the processed data back to the SQL Server database.
  - It gets items from the write_queue.
The idea is that there will be one manager, one reader, one writer, and many workers.
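For reference, here is a minimal, self-contained sketch of that process/queue layout. The class names come from this post, but the worker body, the method names, and the items placed on the queues are placeholders, not the real processing code:

import multiprocessing

class MyWorkerClass(multiprocessing.Process):
    def __init__(self, work_queue, write_queue):
        multiprocessing.Process.__init__(self)
        self.work_queue = work_queue
        self.write_queue = write_queue

    def run(self):
        # Pull items off work_queue until a None sentinel arrives, then
        # pass each "processed" item straight through to write_queue.
        while True:
            item = self.work_queue.get()
            if item is None:
                break
            self.write_queue.put(item)

class MyManagerClass(object):
    def main(self):
        work_queue = multiprocessing.Queue()
        write_queue = multiprocessing.Queue()

        workers = [MyWorkerClass(work_queue, write_queue) for _ in range(4)]
        for w in workers:
            w.start()

        # The reader and writer processes (omitted here) would fill
        # work_queue and drain write_queue, respectively.
        for item in range(10):
            work_queue.put(item)
        for _ in workers:
            work_queue.put(None)  # one sentinel per worker

        results = [write_queue.get() for _ in range(10)]
        for w in workers:
            w.join()
        print(results)

if __name__ == '__main__':
    MyManagerClass().main()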
Other details:
I get the traceback twice in stderr, so I'm thinking that it happens once for the reader and once for the writer. My worker processes get created fine, but just sit there until I send a KeyboardInterrupt because they have nothing in the work_queue.
Both the reader and writer have their own connection to the database, created on initialization.
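To make the failure mode concrete, here is a minimal sketch of a reader that connects at initialization (the connection string and argument names are assumptions, not the actual code). On Windows, start() pickles the whole object to send it to the child process, and the pyodbc.Cursor cannot be pickled:

import multiprocessing
import pyodbc

class MyReaderClass(multiprocessing.Process):
    def __init__(self, work_queue, connection_string):
        multiprocessing.Process.__init__(self)
        self.work_queue = work_queue
        # Created in the parent process. When start() is called,
        # multiprocessing pickles self to send it to the child, and the
        # cursor can't be pickled, producing the TypeError shown above.
        self.connection = pyodbc.connect(connection_string)
        self.cursor = self.connection.cursor()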
Solution:
Thanks to Mark and Ferdinand Beyer for their answers and questions that led to this solution. They rightly pointed out that the Cursor object is not "pickle-able", and pickling is the mechanism multiprocessing uses to pass information between processes.
The issue with my code was that MyReaderClass(multiprocessing.Process) and MyWriterClass(multiprocessing.Process) both connected to the database in their __init__() methods. I created both of these objects (i.e. called their __init__() methods) in MyManagerClass, then called start().
So the parent process would create the connection and cursor objects, then try to send them to the child process via pickle. My solution was to move the creation of the connection and cursor objects into the run() method, which isn't called until the child process is fully created.
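A minimal sketch of the fixed reader under the same assumptions (the connection string, queue, and query are placeholders): only picklable attributes are set in __init__(), and the connection and cursor are created inside run(), which executes in the already-started child process:

import multiprocessing
import pyodbc

class MyReaderClass(multiprocessing.Process):
    def __init__(self, work_queue, connection_string):
        multiprocessing.Process.__init__(self)
        self.work_queue = work_queue
        self.connection_string = connection_string  # a plain string pickles fine

    def run(self):
        # Runs in the child process, so nothing created here has to cross
        # the process boundary via pickle.
        connection = pyodbc.connect(self.connection_string)
        cursor = connection.cursor()
        try:
            # Placeholder query -- the real reader selects the rows to be processed.
            for row in cursor.execute("SELECT id, payload FROM work_table"):
                # Convert the pyodbc Row to a plain tuple so the queue item
                # itself is picklable.
                self.work_queue.put(tuple(row))
        finally:
            connection.close()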