IntelliJ: rerun intermittently failing random test

Posted 2019-08-12 06:31

Question:

I have a JUnit test class with a number of tests. To increase scenario coverage, some of the data in our tests is randomized, meaning it may take different values across individual test runs, for example:

protected MonthlyAmountWithRemainder getMonetaryAmountMultipleOf(int multiplier) {
    BigDecimal monthly = randomBigDecimal(1000);
    BigDecimal multiply = new BigDecimal(multiplier);
    BigDecimal total = monthly.multiply(multiply);
    return new MonthlyAmountWithRemainder(total, monthly, ZERO);
}

Note the randomBigDecimal(1000) call: it may generate any value between 0 and 1000. We also randomize dates and some other values in the tests.

Typically, our tests run just fine, but once in a blue moon (in the current scenario, about once in 50 runs) a test fails without any apparent reason. As you can imagine, such a rare failure makes it nearly impossible to debug the test case to find the cause of the failure or fix it.

So, the question is: is it possible to capture the data generated in a failed test run and re-run the test with exactly the same test data, so that I can debug the failing scenario? In other words, I would like to re-live my previous failed test run. Can that be achieved?

Answer 1:

Simple: use a seed to seed the random number generator. That seed can itself be created from a random source (so that you still get different random values for each run). The core point is to then log the seed that gets used.

A "failure repro" then boils down to using not a random seed, but the very seed value that caused the test to fail.

Of course, some work is required to get things right: you want to ensure that the daily test runs really use different seeds, but also that fixing the seed is trivial and leads to deterministic results.
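The seed-and-log idea can be sketched in plain Java. This is a minimal illustration, assuming a hypothetical helper shaped like the question's randomBigDecimal, rewritten to take an explicit Random so the seed can be controlled:

```java
import java.math.BigDecimal;
import java.util.Random;

public class SeededRandomDemo {

    // Hypothetical stand-in for the question's randomBigDecimal(1000) helper,
    // driven by an explicit Random instance so the seed can be controlled.
    static BigDecimal randomBigDecimal(Random random, int bound) {
        return BigDecimal.valueOf(random.nextDouble() * bound);
    }

    public static void main(String[] args) {
        // Normal run: pick a fresh seed and log it, so a failure can be replayed.
        long seed = new Random().nextLong();
        System.out.println("Test data seed: " + seed);

        BigDecimal original = randomBigDecimal(new Random(seed), 1000);
        // Replay: re-seeding with the logged value reproduces identical "random" data.
        BigDecimal replay = randomBigDecimal(new Random(seed), 1000);
        System.out.println(original.equals(replay)); // prints true
    }
}
```

On a failing run, you would copy the logged seed and hard-code it for the replay instead of drawing a fresh one.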

Alternatively, whenever random data is involved, you should look into "quickcheck"-style property-based testing. It takes a bit of thinking to get into that approach, but it is often worth the time. The idea is that you specify certain properties of your production code, and then the framework generates random data and tries to falsify those properties. And the really nice part: as soon as the framework finds a way to break a property, it starts searching for a minimal example that triggers the problem.
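The generate-then-shrink loop such frameworks run can be illustrated with a toy sketch in plain Java. This is not any real framework's API; the property under test is deliberately false for x >= 100, so a minimal counterexample exists:

```java
import java.util.Random;
import java.util.function.IntPredicate;

public class PropertyCheckSketch {

    // Toy property check: generate random inputs, and when one falsifies the
    // property, "shrink" it toward a minimal counterexample, as quickcheck-style
    // frameworks do. The generator is seeded, so runs are reproducible.
    static int findMinimalCounterexample(IntPredicate property, long seed) {
        Random random = new Random(seed);
        for (int i = 0; i < 1000; i++) {
            int x = random.nextInt(1000);
            if (!property.test(x)) {
                // Shrink: step down while the smaller input still fails.
                while (x > 0 && !property.test(x - 1)) {
                    x--;
                }
                return x;
            }
        }
        return -1; // no counterexample found
    }

    public static void main(String[] args) {
        // Property "x < 100" is false for any x >= 100; the minimal failure is 100.
        int minimal = findMinimalCounterexample(x -> x < 100, 42L);
        System.out.println("Minimal counterexample: " + minimal); // prints 100
    }
}
```

Real frameworks use far smarter generators and shrinking strategies, but the failure-then-minimize loop is the same.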



Answer 2:

Thank you @GhostCat. Here's how we dealt with it in our project:

A test class should extend the following class:

import org.junit.Rule;
import org.junit.rules.TestRule;

public class RandomisedTest {
    @Rule
    public final TestRule randomisedRule = new RandomisedRule();
}

Then, when you have a test method that uses random values, like:

@Test
public void shouldDoStuff() {
    ...
}

and it fails, the failure exception trace will contain a seed number (a long, like 1516787460453).
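How the seed might end up in the exception trace can be sketched in plain Java. This is a hypothetical illustration only; the actual RandomisedRule implementation is project-specific and not shown here:

```java
public class SeedReporter {

    // Hypothetical sketch: catch the test failure and re-throw it with the
    // seed embedded in the message, so the seed shows up in the exception
    // trace. A JUnit rule can apply the same wrapping around the test body.
    static void runWithSeed(long seed, Runnable testBody) {
        try {
            testBody.run();
        } catch (AssertionError e) {
            throw new AssertionError("Randomised test failed, seed = " + seed, e);
        }
    }

    public static void main(String[] args) {
        try {
            runWithSeed(1516787460453L, () -> { throw new AssertionError("boom"); });
        } catch (AssertionError e) {
            // The wrapped failure carries the seed in its message.
            System.out.println(e.getMessage());
        }
    }
}
```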

To re-run the test with the same test data, add the following annotation to the test method:

@Test
@RandomisedRule.Randomised(1516787460453)
public void shouldDoStuff() {
    ...
}

On re-run, the test will use the same test data as in the previous failed run.

Hope that helps someone!