Our toolkit has over 15,000 JUnit tests, and many tests are known to fail if some other test fails. For example, if the method X.foo() uses functionality from Y.bar() and YTest.testBar() fails, then XTest.testFoo() will fail too. Obviously, XTest.testFoo() can also fail because of problems specific to X.foo().
While this is fine and I still want both tests run, it would be nice if one could annotate a test dependency, with XTest.testFoo() pointing to YTest.testBar(). That way, one could immediately see which of the functionality used by X.foo() is also failing and which is not.
Is there such an annotation available in JUnit or elsewhere? Something like:
public class XTest {
    @Test
    @DependsOn(method = "org.example.tests.YTest#testBar")
    public void testFoo() {
        // Assert.something();
    }
}
JExample and TestNG have something like that.
I don't know how useful it is, but if you try it, please come back to tell us whether it was useful.
There's a contribution to JUnit that addresses this: https://github.com/junit-team/junit.contrib/tree/master/assumes
JUnit has org.junit.FixMethodOrder for controlling execution order:
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
This goes on top of your JUnit test class. You can then name your methods public void step1_methodName(), public void step2_otherMethod(), etc., so that they sort into the order you want.
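A minimal sketch of that approach (the class and step1_/step2_ method names are just illustrative); note that this only fixes the execution order, so a failure in step1 does not skip or mark step2 in any way:

import org.junit.FixMethodOrder;
import org.junit.Test;
import org.junit.runners.MethodSorters;

// Runs the test methods in lexicographic name order, so step1_... runs before step2_...
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class XTest {

    @Test
    public void step1_testBar() {
        // exercise Y.bar() first (hypothetical method from the question)
    }

    @Test
    public void step2_testFoo() {
        // exercise X.foo() afterwards; if step1 failed, look there first
    }
}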
You can declare test dependencies in TestNG; the syntax is almost the same as in your example. I don't think JUnit offers something similar.
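For illustration, a minimal sketch of the TestNG syntax (the class and method names are the hypothetical ones from the question; for dependencies that span classes you would typically put the tests in a group and use dependsOnGroups instead):

import org.testng.annotations.Test;

public class XTest {

    @Test
    public void testBar() {
        // exercise Y.bar() here (the hypothetical dependency from the question)
    }

    // If testBar() fails, TestNG reports testFoo() as skipped rather than failed,
    // so the report points straight at the broken dependency.
    @Test(dependsOnMethods = "testBar")
    public void testFoo() {
        // exercise X.foo() here
    }
}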
In the behavior-driven development library jBehave there is a GivenScenarios keyword, which imports a list of scenarios that are run before the main scenario. This gives you a way to define dependencies and have a single point of failure: jBehave's logging will tell you whether the test failed in the dependencies or in the main scenario body.
There really isn't something like this that I'm aware of. (Edit: you learn something new every day :)) In my opinion, this isn't that bad of a thing (though I can see it being useful, especially when JUnit is being used for other forms of automated tests - e.g., integration tests). Your tests, IMO, aren't "unit tests" in the strictest sense of the word (at least not the test for X#foo()). Tests for X#foo() should succeed or fail depending only on the implementation of X#foo(); they should not depend on Y#bar().
What I'd do in your position is mock out Y, implement something like MockY#bar() with very simple, controlled behavior, and use that in the tests for X#foo().
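A minimal sketch of that idea, with made-up shapes for X and Y since the question doesn't show them (a mocking library such as Mockito would achieve the same thing without the hand-written stub):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class XTest {

    // Hypothetical shapes for X and Y; the question doesn't show them,
    // so this only illustrates the dependency-injection idea.
    interface Y {
        int bar();
    }

    static class X {
        private final Y y;
        X(Y y) { this.y = y; }
        int foo() { return y.bar() + 1; }
    }

    // Stub with fixed, controlled behavior: X's test no longer cares
    // whether the real Y implementation is healthy.
    static class StubY implements Y {
        @Override
        public int bar() { return 41; }
    }

    @Test
    public void testFoo() {
        assertEquals(42, new X(new StubY()).foo());
    }
}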
That said, with 15,000 tests, I can see how this would be a pain to refactor. :)