Python test framework with support for non-fatal failures

Posted 2019-04-09 04:30

I'm evaluating test frameworks for automated system tests; so far I'm looking at Python frameworks. In py.test or nose I can't find anything like the EXPECT macros I know from the Google C++ Testing Framework. I'd like to make several assertions in one test without aborting the test at the first failure. Am I missing something in these frameworks, or does this simply not work? Does anybody have suggestions for Python test frameworks usable for automated system tests?

5 answers
贼婆χ
#2 · 2019-04-09 05:06

nose will only abort on the first failure if you pass the -x option at the command line.

test.py:

def test1():
    assert False

def test2():
    assert False

without -x option:

C:\temp\py>C:\Python26\Scripts\nosetests.exe test.py
FF
======================================================================
FAIL: test.test1
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python26\lib\site-packages\nose-0.11.1-py2.6.egg\nose\case.py", line
183, in runTest
    self.test(*self.arg)
  File "C:\temp\py\test.py", line 2, in test1
    assert False
AssertionError

======================================================================
FAIL: test.test2
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python26\lib\site-packages\nose-0.11.1-py2.6.egg\nose\case.py", line
183, in runTest
    self.test(*self.arg)
  File "C:\temp\py\test.py", line 5, in test2
    assert False
AssertionError

----------------------------------------------------------------------
Ran 2 tests in 0.031s

FAILED (failures=2)

with -x option:

C:\temp\py>C:\Python26\Scripts\nosetests.exe test.py -x
F
======================================================================
FAIL: test.test1
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python26\lib\site-packages\nose-0.11.1-py2.6.egg\nose\case.py", line
183, in runTest
    self.test(*self.arg)
  File "C:\temp\py\test.py", line 2, in test1
    assert False
AssertionError

----------------------------------------------------------------------
Ran 1 test in 0.047s

FAILED (failures=1)

You might want to consider reviewing the nose documentation.

啃猪蹄的小仙女
#3 · 2019-04-09 05:08

I wanted something similar for functional testing I'm doing with nose. I eventually came up with this:

import sys
import traceback

def raw_print(fmt, *args):
    # Avoid "%" formatting when no args are given, so literal "%" is safe.
    out_str = fmt % args if args else fmt
    sys.stdout.write(out_str)

class DeferredAsserter(object):
    def __init__(self):
        self.broken = False

    def assert_equal(self, expected, actual):
        raw_print('%s == %s...' % (expected, actual))
        try:
            assert expected == actual
        except AssertionError:
            raw_print('FAILED\n\n')
            self.broken = True
        except Exception:
            raw_print('ERROR\n')
            traceback.print_exc()
            self.broken = True
        else:
            raw_print('PASSED\n\n')

    def invoke(self):
        # The single real assertion: fails if any deferred check failed.
        assert not self.broken

In other words, it prints a string indicating whether each check passed or failed. At the end of the test you call the invoke method, which does the one real assertion. It's definitely not ideal, but I haven't seen a Python testing framework that can handle this kind of testing, nor have I gotten around to figuring out how to write a nose plugin to do it. :-/
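A test built on this pattern makes all its checks first and ends with the one real assertion. Here is a self-contained sketch of the usage (the helper class is condensed and slightly simplified so the snippet runs on its own; it is an illustration, not the exact code above):

```python
import sys

class DeferredAsserter(object):
    """Condensed deferred-assertion helper: records failures, raises later."""
    def __init__(self):
        self.broken = False

    def assert_equal(self, expected, actual):
        if expected == actual:
            sys.stdout.write('%s == %s...PASSED\n' % (expected, actual))
        else:
            sys.stdout.write('%s == %s...FAILED\n' % (expected, actual))
            self.broken = True

    def invoke(self):
        # The single real assertion at the end of the test.
        assert not self.broken

def test_arithmetic():
    da = DeferredAsserter()
    da.assert_equal(4, 2 + 2)         # checked; the test keeps going either way
    da.assert_equal('ab', 'a' + 'b')  # also checked even if the first failed
    da.invoke()                       # only this line can actually fail the test
```

Every check runs and is reported, but only invoke() can fail the test, which gives the EXPECT-style behavior the question asks about.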

做个烂人
#4 · 2019-04-09 05:08

Oddly enough, it sounds like you're looking for something like my claft (command line and filter tester), only far more mature.

claft is (so far) just a toy I wrote to help students with programming exercises. The idea is to provide the exercises with simple configuration files that represent the program's requirements in terms which are reasonably human readable (and declarative rather than programmatic) while also being suitable for automated testing.

claft runs all the defined tests, supplying arguments and inputs to each, checking return codes, and matching output (stdout) and error messages (stderr) against regular expression patterns. It collects all the failures in a list and prints the whole list at the end of each suite.

It does NOT yet do arbitrary dialogs of input/output sequences. So far it just feeds data in then reads all data/errors out. It also doesn't implement timeouts and, in fact, doesn't even capture failed execute attempts. (I did say it's just a toy, so far, didn't I?). I also haven't yet implemented support for Setup, Teardown, and External Check scripts (though I have plans to do so).

Bryan's suggestion of the "robot framework" might be better for your needs; though a quick glance through it suggests that it's considerably more involved than I want for my purposes. (I need to keep things simple enough that students new to programming can focus on their exercises and not spend lots of time fighting with setting up their test harness).

You're welcome to look at claft and use it or derive your own solution from it (it's BSD licensed). Obviously you'd be welcome to contribute back. (It's on Bitbucket (http://www.bitbucket.org/), so you can use Mercurial to clone, fork your own repository ... and submit a "pull request" if you ever want me to look at merging your changes back into my repo.)

Then again perhaps I'm misreading your question.

狗以群分
#5 · 2019-04-09 05:14

Why not (in unittest, but this should work in any framework):

class multiTests(MyTestCase):
    def testMulti(self, tests):
        tests( a == b )
        tests( frobnicate())
        ...

assuming you implemented MyTestCase so that the test function is wrapped into

testlist = []
x.testMulti(testlist.append)
assert all(testlist)
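A sketch of what that MyTestCase wrapper could look like, assuming the idea above; run_multi and the method names are invented for illustration and are not from any real library:

```python
import unittest

class MyTestCase(unittest.TestCase):
    """Hypothetical base class: drives a check method that records booleans
    via a collector callable, then makes one real assertion at the end."""
    def run_multi(self, check_method):
        results = []
        check_method(results.append)  # every check is recorded; none aborts
        self.assertTrue(all(results), 'some checks failed: %s' % results)

class MultiTests(MyTestCase):
    def checks(self, tests):
        # Each call records True/False instead of raising immediately.
        tests(1 + 1 == 2)
        tests('a'.upper() == 'A')

    def test_multi(self):
        self.run_multi(self.checks)
```

A caveat of this design: on failure you only learn which positions in the list were False, not what the failing expressions were, so in practice you would probably record (label, bool) pairs instead of bare booleans.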
ら.Afraid
#6 · 2019-04-09 05:27

You asked for suggestions, so I'll suggest Robot Framework.
