I have a directory with lots of .py files (say test_1.py, test_2.py, and so on), each of them written properly to be used with nose. So when I run the nosetests script, it finds all the tests in all the .py files and executes them.
I now want to parallelize them, so that the tests in all the .py files are treated as parallelizable and delegated to worker processes.
It seems that by default, doing:
nosetests --processes=2
introduces no parallelism at all; all tests across all the .py files still run in a single process.
I tried putting _multiprocess_can_split_ = True in each of the .py files, but that makes no difference.
Thanks for any input!
It seems that nose (specifically its multiprocess plugin) will make tests run in parallel. The caveat is that, because of the way it works, you can end up not executing tests on multiple processes. The plugin creates a test queue and spawns multiple worker processes, and each worker consumes from the queue concurrently. There is no per-process dispatch of tests, so if your tests execute very fast, they can all end up being executed by the same process.
The following example displays this behaviour. Each file defines one trivial test that prints the pid of the worker process executing it (the test bodies below are illustrative sketches; any test that reports os.getpid() will do):
File test1.py
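    import os

    def test_task1():
        # Report which worker process ran this test.
        print(os.getpid())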
File test2.py
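    import os

    def test_task2():
        # Report which worker process ran this test.
        print(os.getpid())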
Running nosetests --processes=2 -s (the -s/--nocapture flag keeps nose from swallowing the print output) gives something like the following; the pids will vary, but notice that they are identical:
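    34512
    34512
    ..
    ----------------------------------------------------------------------
    Ran 2 tests in 0.4s

    OK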
Now, if we add a sleep in one of the tests, that worker stays busy long enough for the other worker to pick the remaining test off the queue:
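    import os
    import time

    def test_task1():
        # While this worker sleeps, the other worker is free to
        # consume the next test from the queue.
        time.sleep(1)
        print(os.getpid())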
We get output like the following; notice the different process ids:
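    34512
    34513
    ..
    ----------------------------------------------------------------------
    Ran 2 tests in 1.1s

    OK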