Python unittest: how to run only part of a test file

Posted 2019-01-30 05:18

Question:

I have a test file that contains tests taking quite a lot of time (they send calculations to a cluster and wait for the result). All of these are in a specific TestCase class.

Since they take time and, furthermore, are not likely to break, I'd like to be able to choose whether this subset of tests does or doesn't run (ideally with a command-line argument, e.g. "./tests.py --offline" or something like that), so I could run most of the tests often and quickly, and the whole set once in a while, when I have time.

For now, I just use unittest.main() to start the tests.

Thanks.

Answer 1:

The default unittest.main() uses the default test loader to make a TestSuite out of the module in which main is running.

You don't have to use this default behavior.

You can, for example, make three unittest.TestSuite instances. (Note: TestSuite.addTests() expects an iterable of test instances, not a TestCase class, so the classes are fed through a loader first.)

    loader = unittest.TestLoader()

  1. The "fast" subset.

    fast = unittest.TestSuite()
    fast.addTests(loader.loadTestsFromTestCase(TestFastThis))
    fast.addTests(loader.loadTestsFromTestCase(TestFastThat))

  2. The "slow" subset.

    slow = unittest.TestSuite()
    slow.addTests(loader.loadTestsFromTestCase(TestSlowAnother))
    slow.addTests(loader.loadTestsFromTestCase(TestSlowSomeMore))

  3. The "whole" set.

    alltests = unittest.TestSuite([fast, slow])


Note that I've adjusted the TestCase names to indicate Fast vs. Slow. You can also subclass unittest.TestLoader to parse the names of classes and create multiple loaders; a sketch follows below.
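
A minimal sketch of that loader idea (PrefixLoader and its behavior are illustrative assumptions, not part of the original answer):

import sys
import unittest

class PrefixLoader(unittest.TestLoader):
    """Loads only the TestCase classes whose names start with a given prefix."""
    def __init__(self, prefix):
        super().__init__()
        self.prefix = prefix

    def loadTestsFromModule(self, module, *args, **kwargs):
        suite = unittest.TestSuite()
        for name in dir(module):
            obj = getattr(module, name)
            if (isinstance(obj, type)
                    and issubclass(obj, unittest.TestCase)
                    and name.startswith(self.prefix)):
                suite.addTests(self.loadTestsFromTestCase(obj))
        return suite

# e.g. collect just the fast tests from the current module:
# fast = PrefixLoader('TestFast').loadTestsFromModule(sys.modules[__name__])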

Then your main program can parse command-line arguments with argparse (added in Python 2.7 and 3.2; the older optparse is deprecated) to pick which suite you want to run: fast, slow, or all.

Or, you can trust that sys.argv[1] is one of three values and use something as simple as this:

if __name__ == "__main__":
    suite = eval(sys.argv[1])  # Be careful with this line!
    unittest.TextTestRunner().run(suite)
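
Since eval of raw command-line input will execute whatever the caller types, a dictionary lookup plus argparse is a safer sketch of the same idea (it assumes the fast, slow, and alltests suites defined above):

import argparse
import unittest

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("suite", choices=["fast", "slow", "alltests"],
                        help="which test suite to run")
    options = parser.parse_args()
    # Look the suite up by name instead of eval'ing arbitrary input.
    suites = {"fast": fast, "slow": slow, "alltests": alltests}
    unittest.TextTestRunner().run(suites[options.suite])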


Answer 2:

To run only a single specific test you can use:

$ python -m unittest test_module.TestClass.test_method
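
The same dotted-name syntax works at coarser granularities too:

$ python -m unittest test_module                           # everything in the module
$ python -m unittest test_module.TestClass                 # one TestCase class
$ python -m unittest test_module.TestClass.test_method    # one method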

More information in the unittest documentation.



Answer 3:

Actually, you can pass the names of the test cases on the command line (unittest.main() picks them up from sys.argv) and only those cases will be run.

For instance, suppose you have

import unittest

class TestAccount(unittest.TestCase):
    ...

class TestCustomer(unittest.TestCase):
    ...

class TestShipping(unittest.TestCase):
    ...

# Lowercase aliases so the command line can use short names:
account = TestAccount
customer = TestCustomer
shipping = TestShipping

if __name__ == '__main__':
    unittest.main()

You can call

$ python test.py account

to run only the account tests, or even

$ python test.py account customer

to run both cases.



Answer 4:

You have basically two ways to do it:

  1. Define your own suite of tests for the class
  2. Create mock classes of the cluster connection that will return actual data.

I am a strong proponent of the second approach; a unit test should test only a small unit of code, not a complex system (like a database or a cluster). But I understand that it is not always possible; sometimes, creating mocks is simply too expensive, or the goal of the test really is the complex system.
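
As a hedged illustration of the second approach (Calculator, the connection object, and its submit() method are hypothetical names, not from the question), the standard unittest.mock module can stand in for the cluster:

import unittest
from unittest import mock  # Python 3.3+; earlier versions can use the 'mock' backport

class Calculator:
    """Hypothetical code under test that submits jobs to a cluster."""
    def __init__(self, connection):
        self.connection = connection

    def compute(self, job):
        return self.connection.submit(job)

class TestCalculatorOffline(unittest.TestCase):
    def test_compute_uses_cluster_result(self):
        # Replace the real cluster connection with a mock that returns canned data.
        fake_connection = mock.Mock()
        fake_connection.submit.return_value = 42
        calc = Calculator(fake_connection)
        self.assertEqual(calc.compute('job-1'), 42)
        fake_connection.submit.assert_called_once_with('job-1')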

Back to option (1), you can proceed in this way:

suite = unittest.TestSuite()
suite.addTest(MyUnitTestClass('quickRunningTest'))
suite.addTest(MyUnitTestClass('otherTest'))

and then passing the suite to the test runner:

unittest.TextTestRunner().run(suite)

More information in the Python documentation: http://docs.python.org/library/unittest.html#testsuite-objects



Answer 5:

I'm doing this using a simple skipIf:

import os
import unittest

SLOW_TESTS = int(os.getenv('SLOW_TESTS', '0'))

@unittest.skipIf(not SLOW_TESTS, "slow")
class CheckMyFeature(unittest.TestCase):
    def runTest(self):
        …

This way I only need to decorate an already existing test case with that single line (no need to create test suites or similar; just the one os.getenv() call at the beginning of my unit test file), and by default the test gets skipped.

If I want to execute it despite being slow, I just call my script like this:

SLOW_TESTS=1 python -m unittest …


Answer 6:

Since you use unittest.main(), you can just run python tests.py --help to get the documentation:

Usage: tests.py [options] [test] [...]

Options:
  -h, --help       Show this message
  -v, --verbose    Verbose output
  -q, --quiet      Minimal output
  -f, --failfast   Stop on first failure
  -c, --catch      Catch control-C and display results
  -b, --buffer     Buffer stdout and stderr during test runs

Examples:
  tests.py                               - run default set of tests
  tests.py MyTestSuite                   - run suite 'MyTestSuite'
  tests.py MyTestCase.testSomething      - run MyTestCase.testSomething
  tests.py MyTestCase                    - run all 'test*' test methods
                                               in MyTestCase

That is, you can simply do

python tests.py TestClass.test_method


Answer 7:

Or you can make use of the TestCase.skipTest() method. For example, add a skipOrRunTest method to your test class like this:

def skipOrRunTest(self, testType):
    # testsToRun = 'ALL'
    # testsToRun = 'testType1, testType2, testType3, testType4'
    testsToRun = 'testType4'
    if testsToRun == 'ALL' or testType in testsToRun:
        return True
    else:
        print("SKIPPED TEST because:\n\t testSuite '" + testType +
              "' NOT IN testsToRun['" + testsToRun + "']")
        self.skipTest("skipppy!!!")

Then add a call to this skipOrRunTest method to the very beginning of each of your unit tests like this:

def testType4(self):
    self.skipOrRunTest('testType4')


Answer 8:

I found another solution, based on how the unittest.skip decorator works: it sets the __unittest_skip__ and __unittest_skip_why__ attributes on the test.

Label-based

I wanted to apply a labeling system, to label some tests as quick, slow, glacier, memoryhog, cpuhog, core, and so on.

Then run all the 'quick' tests, or run everything except the 'memoryhog' tests: your basic whitelist / blacklist setup.

Implementation

I implemented this in 2 parts:

  1. First, add labels to tests (via a custom @testlabel class decorator).
  2. A custom unittest.TestRunner that identifies which tests to skip and modifies the test list before executing.

Working implementation is in this gist: https://gist.github.com/fragmuffin/a245f59bdcd457936c3b51aa2ebb3f6c

(a fully working example was too long to put here)
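
A minimal sketch of the decorator half and the skip marking (the names and details here are illustrative and much simplified relative to the gist):

import unittest

def testlabel(*labels):
    # Record the labels on the TestCase class.
    def decorate(cls):
        cls._labels = set(labels)
        return cls
    return decorate

@testlabel('foo')
class MyTests1(unittest.TestCase):
    def test_one(self):
        pass

def exclude_labels(suite, blacklist):
    # Mark every TestCase whose class carries a blacklisted label as
    # skipped, the same way the unittest.skip decorator does.
    for test in suite:
        if isinstance(test, unittest.TestSuite):
            exclude_labels(test, blacklist)
        elif getattr(test, '_labels', set()) & set(blacklist):
            test.__class__.__unittest_skip__ = True
            test.__class__.__unittest_skip_why__ = 'label exclusion'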

The result being...

$ ./runtests.py --blacklist foo
test_foo (test_things.MyTest2) ... ok
test_bar (test_things.MyTest3) ... ok
test_one (test_things.MyTests1) ... skipped 'label exclusion'
test_two (test_things.MyTests1) ... skipped 'label exclusion'

----------------------------------------------------------------------
Ran 4 tests in 0.000s

OK (skipped=2)

All of the MyTests1 class's tests are skipped because the class has the foo label.

--whitelist also works.



Answer 9:

Look into using a dedicated test runner, like py.test, nose, or possibly even zope.testing. They all have command-line options for selecting tests.

See, for example, nose: https://pypi.python.org/pypi/nose/1.3.0



Answer 10:

I tried @slott's answer:

if __name__ == "__main__":
    suite = eval(sys.argv[1])  # Be careful with this line!
    unittest.TextTestRunner().run(suite)

But that gave me the following error:

Traceback (most recent call last):
  File "functional_tests.py", line 178, in <module>
    unittest.TextTestRunner().run(suite)
  File "/usr/lib/python2.7/unittest/runner.py", line 151, in run
    test(result)
  File "/usr/lib/python2.7/unittest/case.py", line 188, in __init__
    testMethod = getattr(self, methodName)
TypeError: getattr(): attribute name must be string

The following worked for me:

if __name__ == "__main__":
    test_class = eval(sys.argv[1])
    suite = unittest.TestLoader().loadTestsFromTestCase(test_class)
    unittest.TextTestRunner().run(suite)


Answer 11:

I have found another way to select only the test_* methods I want to run, by adding an attribute to them. You basically use a metaclass to wrap the callables inside the TestCase class in a unittest.skip decorator, keyed on a StepDebug attribute. More info at:

Skipping all unit tests but one in Python by using decorators and metaclasses
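
A minimal sketch of the idea (the StepDebug name comes from the linked question; the skip-everything-unmarked direction is an assumption based on its title):

import unittest

class SkipUnmarkedMeta(type):
    def __new__(mcs, name, bases, namespace):
        for attr, value in list(namespace.items()):
            # Assumption: skip every test method NOT marked with StepDebug.
            if (attr.startswith('test') and callable(value)
                    and not getattr(value, 'StepDebug', False)):
                namespace[attr] = unittest.skip('not selected')(value)
        return super().__new__(mcs, name, bases, namespace)

class MyTests(unittest.TestCase, metaclass=SkipUnmarkedMeta):
    def test_everything_else(self):
        pass

    def test_focus(self):
        pass
    test_focus.StepDebug = True  # only this test will run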

I don't know if it is a better solution than those above; I am just providing it as an option.



Answer 12:

Haven't found a nice way to do this before, so sharing here.

Goal: Get a set of test files together so they can be run as a unit, but we can still select any one of them to run by itself.

Problem: the discover method does not allow easy selection of a single test case to run.

Design: see below. This flattens the namespace, so one can select by TestCase class name and leave off the "tests1.test_core" prefix:

./run-tests TestCore.test_fmap

Code

  import unittest

  test_module_names = [
    'tests1.test_core',
    'tests2.test_other',
    'tests3.test_foo',
    ]

  # 'opt' and 'args' come from the script's own command-line parsing,
  # which this answer does not show.
  loader = unittest.defaultTestLoader
  if args:
    alltests = unittest.TestSuite()
    for a in args:
      for m in test_module_names:
        try:
          alltests.addTest(loader.loadTestsFromName(m + '.' + a))
        except AttributeError:
          continue
  else:
    alltests = loader.loadTestsFromNames(test_module_names)

  runner = unittest.TextTestRunner(verbosity=opt.verbose)
  runner.run(alltests)
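
For completeness, a minimal guess at where opt and args might come from (the original answer elides this; optparse matches the era of the code):

  from optparse import OptionParser

  parser = OptionParser()
  parser.add_option('-v', '--verbose', type='int', default=1,
                    help='test runner verbosity')
  opt, args = parser.parse_args()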


Answer 13:

This is the only thing that worked for me.

import sys
import unittest

if __name__ == '__main__':
    unittest.main(argv=sys.argv, testRunner=unittest.TextTestRunner(verbosity=2))

When I called it, though, I had to pass in the name of the class and the test name. A little inconvenient, since I don't have the class and test name combinations memorized.

python ./tests.py class_Name.test_30311

Removing the class name and test name runs all the tests in your file. I find this MUCH easier to deal with than the built-in method, since I don't really change my command on the CLI; I just add the parameter.

Enjoy, Keith