I am new to unit testing and have been scouring the web trying to figure out how to further automate my unit tests. I am building a registration scheme, and I want to test it with a variety of hard drive serial numbers, app numbers, etc.: generate a registration key and then check that it decodes properly. I am looking to automate running the test (with the integrated Visual Studio test framework) against a variety of inputs and, after thousands of runs, find out what percentage of the tests were successful and unsuccessful. Is this possible? Below is my test method:
[TestMethod]
public void GeneratingValidKeyTest()
{
    int numReadDevices;
    string appNum = "123";
    string hddSerial = "1234567890";
    string numDevices = "12";

    string regNumber = Registration.GenerateKey(appNum, numDevices, hddSerial);

    Assert.IsTrue(Registration.CheckKey(regNumber, appNum, out numReadDevices, hddSerial),
        "Generated key does not pass check.");
    Assert.AreEqual(int.Parse(numDevices), numReadDevices,
        "Number of registered devices does not match requested number");
}
If you use NUnit you can set up a series of ValueSources to feed into your method.
If you have a separate value source for appNum, hddSerial, and numDevices, NUnit runs the Cartesian product of the inputs, so you get appNum × hddSerial × numDevices tests.
You shouldn't be aiming to find out what percentage of tests pass, though. The purpose of unit testing is to ensure that all test scenarios pass.
To take Max's example and convert it to ValueSources:
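A sketch of how the question's test could look with value sources (the sample values are placeholders, and `Registration` is the class from the question); NUnit generates one test per combination of the three sources:

```csharp
using NUnit.Framework;

[TestFixture]
public class RegistrationTests
{
    // Placeholder inputs -- replace with values that exercise your real scenarios.
    static readonly string[] AppNums = { "123", "456", "789" };
    static readonly string[] HddSerials = { "1234567890", "0987654321" };
    static readonly string[] NumDevices = { "1", "12", "250" };

    [Test]
    public void GeneratedKeyPassesCheck(
        [ValueSource(nameof(AppNums))] string appNum,
        [ValueSource(nameof(HddSerials))] string hddSerial,
        [ValueSource(nameof(NumDevices))] string numDevices)
    {
        string regNumber = Registration.GenerateKey(appNum, numDevices, hddSerial);

        int numReadDevices;
        Assert.IsTrue(Registration.CheckKey(regNumber, appNum, out numReadDevices, hddSerial),
            "Generated key does not pass check.");
        Assert.AreEqual(int.Parse(numDevices), numReadDevices,
            "Number of registered devices does not match requested number");
    }
}
```

With the sources above, NUnit would run 3 × 2 × 3 = 18 tests, each reported individually in the test runner.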
You can use Microsoft's unit test framework (MSTest) and have it read test data from a data source. The advantage of using MSTest is that it runs on the Express editions of Visual Studio.
You won't get a percentage of errors, though; I agree with @DanielMann that instead you have to ensure your tests cover all possibilities and that they all pass.
So, assuming you have done that and now have a list of cases to test, you can read them from an external file using the DataSourceAttribute.
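A sketch of a data-driven MSTest reading rows from a CSV file (the file name and column headers are assumptions; `Registration` is the class from the question):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RegistrationDataDrivenTests
{
    // MSTest injects the current data row through this property.
    public TestContext TestContext { get; set; }

    [TestMethod]
    [DeploymentItem("RegistrationCases.csv")]
    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                "|DataDirectory|\\RegistrationCases.csv",
                "RegistrationCases#csv",
                DataAccessMethod.Sequential)]
    public void GeneratingValidKeyTest()
    {
        // Column names (AppNum, HddSerial, NumDevices) must match the CSV header row.
        string appNum = TestContext.DataRow["AppNum"].ToString();
        string hddSerial = TestContext.DataRow["HddSerial"].ToString();
        string numDevices = TestContext.DataRow["NumDevices"].ToString();

        string regNumber = Registration.GenerateKey(appNum, numDevices, hddSerial);

        int numReadDevices;
        Assert.IsTrue(Registration.CheckKey(regNumber, appNum, out numReadDevices, hddSerial),
            "Generated key does not pass check.");
        Assert.AreEqual(int.Parse(numDevices), numReadDevices,
            "Number of registered devices does not match requested number");
    }
}
```

The test method runs once per row of the CSV, and each row is reported as a separate pass or failure.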
In the Test Explorer window, the results are listed with failed tests shown first.
In NUnit, the equivalent would look like this:
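A sketch of the NUnit equivalent using TestCaseSource (the CSV file name, its column order, and the `Registration` class are carried over as assumptions from above):

```csharp
using System.Collections.Generic;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class RegistrationCsvTests
{
    // Yields one AppNum,NumDevices,HddSerial triple per line of the CSV.
    static IEnumerable<string[]> RegistrationCases()
    {
        foreach (string line in File.ReadLines("RegistrationCases.csv"))
            yield return line.Split(',');
    }

    [TestCaseSource(nameof(RegistrationCases))]
    public void GeneratedKeyPassesCheck(string[] row)
    {
        string appNum = row[0], numDevices = row[1], hddSerial = row[2];

        string regNumber = Registration.GenerateKey(appNum, numDevices, hddSerial);

        int numReadDevices;
        Assert.IsTrue(Registration.CheckKey(regNumber, appNum, out numReadDevices, hddSerial),
            "Generated key does not pass check.");
        Assert.AreEqual(int.Parse(numDevices), numReadDevices,
            "Number of registered devices does not match requested number");
    }
}
```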
Personally, I find very little value in the "throw lots of random data at the method and make sure it all works" approach.
If you give 500 different serial numbers to the method and they all work, that's fine. But what specific scenarios are they testing in your code? If you can't answer that question, you're probably duplicating test scenarios, and more importantly, missing test scenarios.
Instead of throwing test cases at the wall and seeing what sticks, analyze your code, identify the critical success and failure criteria, and craft tests that exercise them. This has a side benefit of making your tests more descriptive and giving your team members a better idea of what the code is supposed to be doing just by reading the test names. Instead of GeneratingValidKeyTest, your tests should be named such that they describe what they're testing.

As an example, let's say you're building a calculator. With your approach, you'd toss a ton of addition cases at it -- 1+1, 1+3, 5+30, etc. But it's likely that you'd miss 1 + Int32.MaxValue. Or maybe you wouldn't try adding negative numbers. Or testing what happens if the input isn't a valid number. And so on.

Good tests force you to think through all of these types of scenarios as you're writing them.