I have just started writing some unit tests for a Python project of mine using unittest and coverage. I'm currently only testing a small proportion of the code, but I am trying to work out the code coverage.
I run my tests and get the coverage report using the following commands:
python -m unittest discover -s tests/
coverage run -m unittest discover -s tests/
coverage report -m
The problem I'm having is that coverage is telling me I have 44% code coverage, and it is only counting the files that:
were tested in the unit tests (i.e., all the files that were not tested are missing and not included in the overall coverage)
were in the libraries in the virtual environment
It is also reporting coverage for the actual tests themselves. Surely it should not be including the tests in the results?
Furthermore, it says the files that are actually tested in these unit tests only have their first few lines covered (which in most cases are just the import statements).
How do I get a more realistic code coverage figure, or is this how it is meant to be?
Add --source=. to the coverage run line. It will both limit the focus to the current directory, and it will search for .py files that weren't run at all.
If you use nose as a test runner instead, the coverage plugin for it provides similar options.
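If you would rather not pass these flags on every run, the same settings can go in a .coveragerc file in the project root (a sketch; adjust the omit patterns to match your own layout, e.g. the name of your virtual environment directory):

[run]
source = .
omit =
    tests/*
    venv/*

With that file in place, a plain coverage run -m unittest discover -s tests/ picks the configuration up automatically.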