Python: run unittests continuously, or run each test multiple times

Posted 2019-07-10 16:06

Question:

I have written unit test cases to test my application, and they work as expected without issues.

Below is a sample test case:

import unittest

class CreateUser(unittest.TestCase):
    def setUp(self):
        pass

    def tearDown(self):
        pass

    def test_send_message(self):
        # my script goes here
        print("success")

if __name__ == '__main__':
    unittest.main()

If I run this test, it executes as expected, but I want to run this test case N times.

To do that, I added a for loop in the main block, but it only runs once. The code I used is below:

if __name__ == '__main__':
    for i in range(1, 5):
        unittest.main()

I also used the schedule library to run the test every 10 minutes, but no luck.

Is there a way to run this test case multiple times, a workaround that I am missing, or a continuous build tool that could achieve this?

Thanks in advance

Answer 1:

First, a little bit of caution.

Why do you want to run the same test five times? I don't want to make assumptions without seeing your code, but this is a pretty serious code smell. A unit test must be repeatable, and if running it five times in a row does not give the same result as running it once, then something in the test is not repeatable. In particular, if early runs of the test are creating side effects used by later runs, or if there is some kind of random number involved, both of those are very bad situations that need to be repaired rather than running the test multiple times. Given just the information that we have here, it seems very likely that the best advice will be don't run that test multiple times!

Having said that, you have a few different options.

  1. Put the loop inside the test

Assuming there is something meaningful about calling a function five times, it is perfectly reasonable to do something like:

def test_function_calls(self):
    # Repeat the check five times inside a single test.
    for _ in range(5):
        self.assertTrue(f())
  2. Since you mentioned the nose tag, you have some options for parameterized tests, which usually consist of running the same (code) test over different input values. If you use something like https://github.com/wolever/nose-parameterized , your result might look something like this:

from nose_parameterized import parameterized

@parameterized.expand([('atest', 'a', 1), ('btest', 'b', 2)])
def test_function_calls(self, name, input, expected):
    self.assertEqual(expected, f(input))

Parameterized tests are, as the name implies, typically for checking one piece of test code against several pieces of data. You can use a list of dummy data if you just want the test to run several times, as in the sketch below, but that's another suspicious code structure that goes back to my original point.
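For illustration, a minimal sketch of that dummy-data trick; send_message() here is a hypothetical stand-in for whatever your test actually exercises:

@parameterized.expand([(str(i),) for i in range(5)])
def test_send_message_repeatedly(self, run_id):
    # Five throwaway parameters, just to generate five copies of the test.
    self.assertTrue(send_message())  # send_message() is a placeholder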

Bonus side note: almost all "continuous" build tools are set up to trigger builds/tests/etc. on specific conditions or events, like when code is submitted to a repository. It is very unusual for them to simply run tests continuously.
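One more side note, this time on the code in the question: unittest.main() calls sys.exit() when the test run finishes, which is why your for loop only executed once. A minimal sketch of a workaround is to pass exit=False (added in Python 2.7/3.1, if memory serves):

import unittest

if __name__ == '__main__':
    for _ in range(5):
        # exit=False stops unittest.main() from calling sys.exit()
        # after the first run, so the loop actually repeats.
        unittest.main(exit=False)

Whether you should do that is, per the caution above, a separate question.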

I've done my best to answer your question here, but I feel like something is missing. You may want to clarify exactly what you are trying to accomplish in order to get the best answer.



Answer 2:

I like to run this kind of thing in a simple bash loop. Of course, this only works if you're using bash:

while true; do python setup.py test; done
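If bash isn't available, a rough Python equivalent of the same idea (assuming the same python setup.py test entry point as above) might be:

import subprocess

# Re-run the test suite forever, mirroring the bash one-liner above;
# stop it with Ctrl+C.
while True:
    subprocess.call(['python', 'setup.py', 'test'])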


Answer 3:

Apologies, as a new user I am unable to comment. I just wanted to respond to GrandOpener's concern about your desire to re-execute tests. I am in a similar situation myself, with some 'unreliable' tests. The problem, as I see it, with unreliable or nondeterministic tests is that it's very hard to prove you've fixed them, since they may fail only 1 in 100 times.

My thinking was that I would execute the test in question X times, building up a distribution of how many iterations happen before a failure. With that, I thought I could either (a) use the maximum iterations before failure as the number of iterations to check after the unreliability 'fix', or (b) use some fancy statistical methods to show that the 'fix' is Y% likely to have resolved the issue.
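As a rough sketch of option (b), assuming each run fails independently with some probability p before the fix (both p and alpha below are assumed example values):

import math

# If a flaky test failed with probability p per run before the fix, then
# N consecutive passes after the fix happen by pure luck with probability
# (1 - p) ** N. Solve for the N that pushes that below a chosen level.
p = 0.01          # assumed pre-fix failure rate (~1 failure per 100 runs)
alpha = 0.05      # accept a 5% chance the flakiness survived unnoticed
n_runs = math.ceil(math.log(alpha) / math.log(1 - p))
print(n_runs)     # ~299 consecutive passes needed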

With no malice or condescension: how would GrandOpener go about proving that his hypothetical unreliable tests have been fixed?