Until now, I've used an improvised unit testing procedure: basically a whole load of unit test programs run automatically by a batch file. Although a lot of them explicitly check their results, many more cheat - they dump their results out to text files, which are versioned. Any change in the test results gets flagged by Subversion, and I can easily identify what the change was. Many of the tests output dot files or some other form that gives me a visual representation of the output.
The trouble is that I'm switching to CMake. Going with the CMake flow means using out-of-source builds, so the convenience of dumping results into a shared source/build folder and versioning them along with the source doesn't really work.
As a replacement, what I'd like to do is to tell the unit test tool where to find files of expected results (in the source tree) and get it to do the comparison. On failure, it should provide the actual results and diff listings.
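Something like the following is roughly what I have in mind - just a sketch, with made-up helper and file names; the expected-results directory would be handed to the test binary as a command-line argument, e.g. via add_test(NAME test0001 COMMAND test0001 ${CMAKE_CURRENT_SOURCE_DIR}/expected) in the CMakeLists.txt:

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Made-up helper: slurp an expected-results file into a string.
std::string Read_Expected (const std::string &p_Dir, const std::string &p_Name)
{
    std::ifstream l_File ((p_Dir + "/" + p_Name).c_str ());
    std::ostringstream l_Contents;
    l_Contents << l_File.rdbuf ();
    return l_Contents.str ();
}

int main (int argc, char **argv)
{
    if (argc < 2)
    {
        std::cerr << "usage: test0001 <expected-results-dir>" << std::endl;
        return 1;
    }

    std::ostringstream l_Actual;
    l_Actual << "Some" << std::endl << "Actual" << std::endl;

    std::string l_Expected (Read_Expected (argv[1], "test0001.txt"));
    if (l_Actual.str () != l_Expected)
    {
        // On failure, show both sides so the difference is easy to spot.
        std::cerr << "Expected:" << std::endl << l_Expected;
        std::cerr << "Actual:"   << std::endl << l_Actual.str ();
        return 1; // CTest treats a non-zero exit status as failure
    }
    return 0;
}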
Is this possible, or should I take a completely different approach?
Obviously, I could ignore CTest and just adapt what I've always done to out-of-source builds. I could version my folder-where-all-the-builds-live, for instance (with liberal use of 'ignore', of course). Is that sane? Probably not, as each build would end up with a separate copy of the expected results.
Also, any advice on the recommended way to do unit testing with CMake/CTest would be gratefully received. I wasted a fair bit of time with CMake, not because it's bad, but because I didn't understand how best to work with it.
EDIT
In the end, I decided to keep the CMake/CTest side of the unit testing as simple as possible. To test actual against expected results, I found a home for the following function in my library...
#include <ostream>
#include <sstream>
#include <string>

// Compares a null-terminated array of expected lines against the actual
// output captured in a string stream. On failure, prints both sets of
// results to p_Stream so the difference can be inspected.
bool Check_Results (std::ostream             &p_Stream,
                    const char               *p_Title,
                    const char              **p_Expected,
                    const std::ostringstream &p_Actual)
{
    // Join the expected lines into a single newline-terminated string.
    std::ostringstream l_Expected_Stream;
    while (*p_Expected != 0)
    {
        l_Expected_Stream << (*p_Expected) << std::endl;
        p_Expected++;
    }

    std::string l_Expected (l_Expected_Stream.str ());
    std::string l_Actual (p_Actual.str ());
    bool l_Pass = (l_Actual == l_Expected);

    p_Stream << "Test: " << p_Title << " : ";
    if (l_Pass)
    {
        p_Stream << "Pass" << std::endl;
    }
    else
    {
        p_Stream << "*** FAIL ***" << std::endl;
        p_Stream << "===============================================================================" << std::endl;
        p_Stream << "Expected Results For: " << p_Title << std::endl;
        p_Stream << "-------------------------------------------------------------------------------" << std::endl;
        p_Stream << l_Expected;
        p_Stream << "===============================================================================" << std::endl;
        p_Stream << "Actual Results For: " << p_Title << std::endl;
        p_Stream << "-------------------------------------------------------------------------------" << std::endl;
        p_Stream << l_Actual;
        p_Stream << "===============================================================================" << std::endl;
    }
    return l_Pass;
}
A typical unit test now looks something like...
// Example test; the actual output deliberately differs from the expected
// output, so this test fails and demonstrates the failure report.
bool Test0001 ()
{
    std::ostringstream l_Actual;
    const char *l_Expected [] =
    {
        "Some",
        "Expected",
        "Results",
        0 // null terminator marks the end of the expected lines
    };
    l_Actual << "Some" << std::endl
             << "Actual" << std::endl
             << "Results" << std::endl;
    return Check_Results (std::cout, "0001 - not a sane test", l_Expected, l_Actual);
}
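Each test program then needs a main that aggregates the results; since CTest treats a non-zero exit status as a test failure, that's all the integration required. A minimal sketch (the test list here is made up):

bool Test0001 (); // as above; further tests follow the same pattern

int main ()
{
    bool l_All_Passed = true;
    l_All_Passed = Test0001 () && l_All_Passed;
    // ...call the remaining tests the same way...
    // CTest treats a non-zero exit status as a test failure.
    return l_All_Passed ? 0 : 1;
}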
Where I need a reusable data-dumping function, it takes a parameter of type std::ostream&, so it can dump to an actual-results stream.
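For example, a dumper for a made-up Point type (names are illustrative only):

#include <ostream>

struct Point { int x; int y; };

// Writing against std::ostream& means the same function can target
// std::cout, a file, or the std::ostringstream captured by a test.
void Dump_Point (std::ostream &p_Stream, const Point &p_Point)
{
    p_Stream << "(" << p_Point.x << ", " << p_Point.y << ")" << std::endl;
}

In a test, I point it at the l_Actual stream and then hand that stream to Check_Results.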