In Python 2.7, multiprocessing.Queue throws a broken pipe error when it is created and populated inside a function. Here is a minimal example that reproduces the problem:
```python
#!/usr/bin/python
# -*- coding: utf-8 -*-
import multiprocessing

def main():
    q = multiprocessing.Queue()
    for i in range(10):
        q.put(i)

if __name__ == "__main__":
    main()
```
Running it throws the broken pipe error below:

```
Traceback (most recent call last):
  File "/usr/lib64/python2.7/multiprocessing/queues.py", line 268, in _feed
    send(obj)
IOError: [Errno 32] Broken pipe

Process finished with exit code 0
```
I am unable to decipher why. It would certainly be strange if we could not populate Queue objects from inside a function.
What happens here is that when you call main(), it creates the Queue, puts 10 objects into it, and ends the function, garbage-collecting all of its local variables and objects, including the Queue. BUT you get this error because the queue's feeder thread is still trying to send the last numbers through the pipe.

As the documentation explains, the put() is made in another thread, so it does not block the execution of the script; this allows main() to end before the Queue operations are complete. Try this:
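A sketch of that idea, assuming a short delay is enough for the feeder thread to drain its buffer before the Queue is garbage-collected:

```python
import multiprocessing
import time

def main():
    q = multiprocessing.Queue()
    for i in range(10):
        q.put(i)
    # Hypothetical workaround: sleep briefly so the queue's feeder
    # thread can flush its buffer into the pipe before main() returns
    # and q is garbage-collected.
    time.sleep(0.1)

if __name__ == "__main__":
    main()
```

This is a timing-based workaround, not a guarantee: if the feeder thread needs longer than the sleep, the error can still occur.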
There should be a way to join the Queue or block execution until the objects have actually been sent; take a look at the documentation.

With a delay using time.sleep(0.1), as suggested by @HarryPotFleur, this problem is solved. However, I tested the code with Python 3 and the broken pipe issue does not happen at all there. I think this was reported as a bug and later fixed.

When you fire up Queue.put(), an implicit feeder thread is started to deliver the data to the queue. Meanwhile, the main application finishes, and there is no receiving end for the data (the queue object is garbage-collected).
I would try this: call close() followed by join_thread() on the queue before main() returns. join_thread() ensures that all data in the buffer has been flushed; note that close() must be called before join_thread().
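A minimal sketch of that fix, reusing the main() from the question:

```python
import multiprocessing

def main():
    q = multiprocessing.Queue()
    for i in range(10):
        q.put(i)
    q.close()        # signal that this process will put no more data
    q.join_thread()  # block until the feeder thread has flushed its buffer
    # By the time we return, everything has been written to the pipe,
    # so no broken-pipe error occurs when q is garbage-collected.

if __name__ == "__main__":
    main()
```

Unlike the sleep workaround, this blocks exactly as long as needed, regardless of how much data is in the buffer.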