Using Google Test 1.6 (Windows 7, Visual Studio C++). How can I turn off a given test? (aka how can I prevent a test from running). Is there anything I can do besides commenting out the whole test?
Here's the expression to include tests whose names have the strings foo1 or foo2 in them and exclude tests whose names have the strings bar1 or bar2 in them:
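A shell sketch of that expression (`foo_test` is a placeholder for your test binary; the `echo` just shows the command you would run):

```shell
# 'foo_test' is a placeholder for your test binary.
# Patterns before '-' are included; patterns after it are excluded.
GTEST_FILTER='*foo1*:*foo2*-*bar1*:*bar2*'
echo ./foo_test --gtest_filter="$GTEST_FILTER"
```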
More than one test can be skipped the same way, by listing additional patterns in the filter.
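For instance, a filter that starts with `-` runs every test except the listed ones (the test names and binary name here are made up):

```shell
# A leading '-' means: run everything except these ':'-separated patterns.
GTEST_FILTER='-MyLibrary.TestBroken1:MyLibrary.TestBroken2'
echo ./foo_test --gtest_filter="$GTEST_FILTER"
```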
You can also run a subset of tests, according to the documentation:
Running a Subset of the Tests
Not the prettiest solution, but it works.
You can now use the GTEST_SKIP() macro to conditionally skip a test at runtime. Note that this is a fairly recent feature, so you may need to update your GoogleTest library to use it.
For another approach, you can wrap your tests in a function and use normal conditional checks at runtime to only execute them if you want.
This is useful for me as I'm trying to run some tests only when a system supports dual stack IPv6.
Technically, that dual-stack check shouldn't really be a unit test, since it depends on the system. But I can't write any integration tests until I've confirmed these work anyway, and this way the suite won't report failures when it isn't the code's fault. As for testing that logic itself, I have stub objects that simulate a system's support for dual stack (or the lack of it) by constructing fake sockets.
The only downside is that the test output and the number of tests will change which could cause issues with something that monitors the number of successful tests.
You can also use ASSERT_* rather than EXPECT_*. An ASSERT will abort the rest of the test if it fails, which prevents a lot of redundant output being dumped to the console.
I prefer to do it in code:
I can comment out both lines to run all tests, uncomment the first line to test only the single feature I'm investigating or working on, or uncomment the second line when one test is broken but I want to run everything else.
You can also test/exclude a suite of features by using wildcards and writing a list, "MyLibrary.TestNetwork*" or "-MyLibrary.TestFileSystem*".