Pretty much every product I've worked on over the years has involved some level of shell scripts (or batch files, PowerShell etc. on Windows). Even though we wrote the bulk of the code in Java or C++, there always seemed to be some integration or install tasks that were better done with a shell script.
The shell scripts thus become part of the shipped code and therefore need to be tested just like the compiled code. Does anyone have experience with some of the shell script unit test frameworks that are out there, such as shunit2? I'm mainly interested in Linux shell scripts for now; I'd like to know how well these test harnesses duplicate the functionality and ease of use of other xUnit frameworks, and how easy it is to integrate them with continuous build systems such as CruiseControl or Hudson.
I'm using shunit2 for shell scripts related to a Java/Ruby web application in a Linux environment. It's been easy to use, and not a big departure from other xUnit frameworks.
I have not tried integrating with CruiseControl or Hudson/Jenkins, but in implementing continuous integration via other means I've encountered these issues:
- Exit status: When a test suite fails, shunit2 does not use a nonzero exit status to communicate the failure. So you either have to parse the shunit2 output to determine pass/fail of a suite, or change shunit2 to behave as some continuous integration frameworks expect, communicating pass/fail via exit status.
- XML logs: shunit2 does not produce a JUnit-style XML log of results.
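The exit-status issue can be worked around with a thin wrapper script. Below is a sketch of that idea; the function name run_suite is made up for illustration, and it assumes a failing shunit2 suite prints a summary line beginning with FAILED (e.g. "FAILED (failures=1)"):

```shell
#!/usr/bin/env bash
# Hypothetical CI wrapper for a shunit2 suite. As noted above, shunit2
# does not communicate suite failure via its exit status, so we scan
# the text summary for a FAILED line and turn that into a nonzero
# exit status that CI tools understand.
run_suite() {
    local output
    output="$("$1" 2>&1)"          # run the suite script, capture all output
    printf '%s\n' "$output"        # still show the results in the build log
    if printf '%s\n' "$output" | grep -q '^FAILED'; then
        return 1                   # suite had failures
    fi
    return 0
}
```

The XML-log gap is harder to close from the outside; parsing the same summary into JUnit XML is possible but beyond a quick wrapper.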
Wondering why nobody mentioned BATS. It's up-to-date and TAP-compliant.
An example test file, addition.bats:
#!/usr/bin/env bats
@test "addition using bc" {
  result="$(echo 2+2 | bc)"
  [ "$result" -eq 4 ]
}
Run:
$ bats addition.bats
✓ addition using bc
1 test, 0 failures
Roundup by @blake-mizerany sounds great, and I should make use of it in the future, but here is my "poor man's" approach for creating unit tests:
- Separate everything testable as a function.
- Move the functions into an external file, say functions.sh, and source it into the script. You can use source "$(dirname "$0")/functions.sh" for this purpose.

At the end of functions.sh, embed your test cases in the if condition below, which runs only when the file is executed directly rather than sourced:
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    # your test cases go here
fi
Your tests are literal calls to the functions, followed by simple checks of exit codes and variable values. I like to add a simple utility function like the one below to make tests easy to write:
function assertEquals()
{
    msg=$1; shift
    expected=$1; shift
    actual=$1; shift
    if [ "$expected" != "$actual" ]; then
        echo "$msg EXPECTED=$expected ACTUAL=$actual"
        exit 2
    fi
}
Finally, run functions.sh directly to execute the tests.
Here is a sample to show the approach:
#!/bin/bash
function adder()
{
    return $(( $1 + $2 ))
}

(
    [[ "${BASH_SOURCE[0]}" == "${0}" ]] || exit 0

    function assertEquals()
    {
        msg=$1; shift
        expected=$1; shift
        actual=$1; shift
        /bin/echo -n "$msg: "
        if [ "$expected" != "$actual" ]; then
            echo "FAILED: EXPECTED=$expected ACTUAL=$actual"
        else
            echo PASSED
        fi
    }

    adder 2 3
    assertEquals "adding two numbers" 5 $?
)
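One caveat with the sample above: return can only carry values in the 0-255 range, so a sum like adder 200 100 would wrap around. A variation of my own (not from the answer) that echoes the result and captures it with command substitution avoids that limit:

```shell
#!/usr/bin/env bash
# Variation on adder: communicate the result via stdout instead of the
# exit status, since `return` truncates values to the 0-255 range.
function adder()
{
    echo $(( $1 + $2 ))
}

sum=$(adder 200 100)   # captures "300", which `return` could not carry
```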
Roundup:
http://bmizerany.github.com/roundup/
There is a link to an article in the README explaining it in detail.
In addition to roundup and shunit2, my overview of shell unit testing tools also included assert.sh and shelltestrunner.
I mostly agree with the roundup author's critique of shunit2 (some of it subjective), so I excluded shunit2 after looking at the documentation and examples, although it did look familiar, given some experience with JUnit.
In my opinion, shelltestrunner is the most original of the tools I looked at, since it uses a simple declarative syntax for test case definition. As usual, any level of abstraction buys some convenience at the cost of some flexibility. Even though the simplicity is attractive, I found the tool too limiting for my case, mainly because there is no way to define setup/tearDown actions (for example, manipulating input files before a test, or removing state files after a test).
At first I was a little confused that assert.sh allows asserting either the output or the exit status, but not both, while I needed both; I stayed confused long enough to write a couple of test cases using roundup instead. But I soon found roundup's set -e mode inconvenient, since in some cases a nonzero exit status is expected as a means of communicating the result in addition to stdout, and that makes the test case fail in said mode. One of the samples shows the solution:
status=$(set +e ; rup roundup-5 >/dev/null ; echo $?)
But what if I need both the nonzero exit status and the output? I could, of course, set +e before the invocation and set -e after, or set +e for the whole test case. But that goes against roundup's principle that "Everything is an Assertion", so it felt like I was starting to work against the tool.
By then I had realized that assert.sh's "drawback" of allowing only the exit status or the output to be asserted is actually a non-issue, since I can just pass assert_raises a call to test with a compound expression like this:
output=$($tested_script_with_args)
status=$?
expected_output="the expectation"
assert_raises "test \"$output\" = \"$expected_output\" -a $status -eq 2"
As my needs were really basic (run a suite of tests, display that all went fine or what failed), I liked the simplicity of assert.sh, so that's what I chose.
After looking for a simple unit test framework for shell that could generate xml results for Jenkins and not really finding anything, I wrote one.
It's on sourceforge - the project's name is jshu.
http://sourceforge.net/projects/jshu
You should try out the assert.sh lib; it's very handy and easy to use:
expected="Hello"
actual="World!"
assert_eq "$expected" "$actual" "not equivalent!"
# => x Hello == World :: not equivalent!