I have a deployment task that is executed on my test machine. The purpose is to test the freshly built RPM from Jenkins on the very same machine.
Therefore I set up a deploy job in Jenkins that executes the following shell lines:
artifact=$(ls build/*.rpm | head -1)
sudo /usr/local/sbin/jenkins-rpm-install "$artifact"
rm -rf build/
To install the RPM, I made a small shell script that Jenkins has exclusive sudo permissions for:
#!/bin/sh
#
# allows jenkins to install rpm as privileged user
#
# add the following line to /etc/sudoers:
# jenkins ALL = NOPASSWD: /usr/local/sbin/jenkins-rpm-install
#
artifact="$1"
rpm -vv --install --force "$artifact"
Now I have a problem: whenever the RPM install fails, Jenkins does not recognize the error code and marks the build as a success.
Does anyone have an idea how to solve this properly? Tips to improve this process are also welcome.
If you are using an "Execute shell" step in your job configuration, Jenkins will mark the build as failed if the exit code is != 0. It might be sufficient to alter your script by adding exit $? at the end. What about simply checking rpm's exit code in your script and reporting it to Jenkins yourself?
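For example, a minimal sketch of that check (the status variable and the error message are illustrative additions, not part of your original script):

#!/bin/sh
# run the install, capture rpm's exit code, and hand it back to Jenkins
artifact="$1"
rpm -vv --install --force "$artifact"
status=$?
if [ "$status" -ne 0 ]; then
    echo "rpm install of $artifact failed with exit code $status" >&2
fi
exit $status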
Or, with less overhead, let the shell propagate the failure itself:
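A sketch assuming POSIX sh: with set -e the script aborts on the first failing command, and that command's exit code becomes the script's exit code, which Jenkins then sees.

#!/bin/sh
# -e: exit immediately if a command fails, propagating its exit code
set -e
artifact="$1"
rpm -vv --install --force "$artifact"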