I want to prevent two jobs of the same type (same repository) from running in parallel on the same node.
How can I do this using Groovy inside a Jenkinsfile?
You can use the disableConcurrentBuilds property:
properties properties: [
    ...
    disableConcurrentBuilds(),
    ...
]
Then the job will wait for the older one to finish first.
Another way is to use the Lockable Resources plugin: https://wiki.jenkins-ci.org/display/JENKINS/Lockable+Resources+Plugin
You can define locks (mutexes) however you want and can put variables in the names. E.g. to prevent multiple jobs from using a compiler concurrently on a build node:
stage('Build') {
    lock(resource: "compiler_${env.NODE_NAME}", inversePrecedence: true) {
        milestone 1
        sh "fastlane build_release"
    }
}
So, if you wanted to prevent more than one job of the same branch from running concurrently per node, you could do something like:
stage('Build') {
    lock(resource: "lock_${env.NODE_NAME}_${env.BRANCH_NAME}", inversePrecedence: true) {
        milestone 1
        sh "fastlane build_release"
    }
}
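Applied to the original question (no two builds of the same job running on the same node), a minimal sketch using a lock name built from the job and node names. The standard env.JOB_NAME and env.NODE_NAME pipeline variables are assumed, and the lock_ prefix is just an illustrative naming convention:

```groovy
node {
    // One lock per (job, node) pair: a second build of the same job
    // on this node will queue until the lock is released.
    lock(resource: "lock_${env.JOB_NAME}_${env.NODE_NAME}") {
        stage('Build') {
            sh "fastlane build_release"
        }
    }
}
```

Note that env.NODE_NAME is only populated inside a node block, which is why the lock is taken there.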
From: https://www.quernus.co.uk/2016/10/19/lockable-resources-jenkins-pipeline-builds/
The answer provided in https://stackoverflow.com/a/43963315/6839445 is deprecated.
The current method to disable concurrent builds is to set options:
options { disableConcurrentBuilds() }
Detailed description is available here: https://jenkins.io/doc/book/pipeline/syntax/#options
I think there is more than one approach to this problem:
- Use the lock step, as suggested in another answer.
- Execute concurrent builds on different nodes if necessary, using a separate node or label for each project.
Example using an options block in the declarative pipeline syntax:
pipeline {
    options {
        disableConcurrentBuilds()
    }
    ...
}
The "Throttle Concurrent Builds" plugin now supports Pipeline, since throttle-concurrents-2.0. So now you can do something like this:
// Fire me twice, one immediately after the other,
// by double-clicking 'Build Now' or from a parallel step in another job.
stage('pre') {
    echo "I can run in parallel"
    sleep(time: 10, unit: 'SECONDS')
}
throttle(['my-throttle-category']) {
    // Because only the node block is really throttled.
    echo "I can also run in parallel"
    node('some-node-label') {
        echo "I can only run alone"
        stage('work') {
            echo "I also can only run alone"
            sleep(time: 10, unit: 'SECONDS')
        }
    }
}
stage('post') {
    echo "I can run in parallel again"
    // Let's wait enough for the next execution to catch
    // up, just to illustrate.
    sleep(time: 20, unit: 'SECONDS')
}
From the pipeline stage view you'll be able to see the throttling in action.
However, please be advised that this only works for node blocks within the throttle block. I do have other pipelines where I first allocate a node, then do some work which doesn't need throttling, and then some which does.
node('some-node-label') {
    // do some concurrent work
    // This WILL NOT work:
    throttle(['my-throttle-category']) {
        // do some non-concurrent work
    }
}
In this case the throttle step doesn't solve the problem, because the throttle step is the one inside the node step and not the other way around. In this case the lock step is better suited for the task.
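For that layout, a sketch of the same structure using lock instead of throttle; the lock name below is purely illustrative, and env.NODE_NAME is assumed to scope the lock to the current node:

```groovy
node('some-node-label') {
    // do some concurrent work

    // This DOES work: a lock can be acquired inside a node block.
    lock("my-lock_${env.NODE_NAME}") {
        // do some non-concurrent work
    }
}
```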
Install Jenkins Lockable Resources Plugin.
In your pipeline script, wrap the relevant steps in a lock block and give the lockable resource a name:
lock("test-server") {
    // your steps here
}
Use the name of whatever resource you are locking. In my experience it's usually a test server or test database.
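If you have a pool of interchangeable resources (say, several test servers), the plugin can also acquire one free resource from a labelled pool. A sketch based on the Lockable Resources plugin's label/quantity/variable options; the 'test-servers' label and LOCKED_SERVER variable name are assumptions for illustration:

```groovy
// Acquire any one free resource carrying the label 'test-servers'
// (the resources and their label are defined in Jenkins' global configuration).
lock(label: 'test-servers', quantity: 1, variable: 'LOCKED_SERVER') {
    // The name of the acquired resource is exposed via the chosen variable.
    echo "Running tests against ${env.LOCKED_SERVER}"
}
```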
One of the options is to use the Jenkins REST API. I researched other options, but it seems this is the only one available with pipeline functionality.
You should write a script which polls Jenkins for info about currently running jobs and checks whether a job of the same type is running. To do this, use the Jenkins REST API; you can find its documentation in the bottom right corner of your Jenkins page. Example script:
#!/usr/bin/env bash
# this script waits for the integration test build to finish
# usage: ./wait-for-tests.sh <jenkins_user_id> <jenkins_user_token> <build_number>
jenkins_user=$1
jenkins_token=$2
build_number=$3
job_name="integration-tests"
branch="develop"
previous_build_number=$((build_number - 1))
previous_job_status=$(curl -s http://${jenkins_user}:${jenkins_token}@jenkins.mycompany.com/job/mycompany/job/${job_name}/branch/${branch}/${previous_build_number}/api/json | jq -r '.result')
while [ "$previous_job_status" == "null" ];
do
    previous_job_status=$(curl -s http://${jenkins_user}:${jenkins_token}@jenkins.mycompany.com/job/mycompany/job/${job_name}/branch/${branch}/${previous_build_number}/api/json | jq -r '.result')
    echo "Waiting for tests completion"
    sleep 10
done
echo "Seems that tests are finished."
I've used bash here, but you may use any language. Then just call this script inside your Jenkinsfile:
sh "./wait-for-tests.sh ${env.REMOTE_USER} ${env.REMOTE_TOKEN} ${env.BUILD_NUMBER}"
So it will wait until the job completes (don't be confused by the integration-tests mentions; it's just the job name).
Be also aware that in rare cases this script may cause deadlock when both jobs are waiting for each other, so you may want to implement some max retry policies here instead of infinite waiting.
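One way to add such a retry limit is to bound the polling loop with a maximum attempt count. A minimal self-contained sketch: check_previous_build is a hypothetical stand-in for the curl/jq call from the script above, stubbed here to report "null" (still running) on the first two calls:

```shell
#!/usr/bin/env bash
attempt=0
max_attempts=30

# Hypothetical stand-in for the curl | jq call from the script above.
# Stubbed: reports "null" (still running) on the first two calls.
check_previous_build() {
    if [ "$attempt" -lt 2 ]; then echo "null"; else echo "SUCCESS"; fi
}

status="null"
while [ "$status" = "null" ]; do
    if [ "$attempt" -ge "$max_attempts" ]; then
        echo "Giving up after ${max_attempts} attempts"
        exit 1
    fi
    status=$(check_previous_build)
    attempt=$((attempt + 1))
    # sleep 10   # real polling interval; omitted in this sketch
done
echo "Previous build finished with status: ${status}"
```

With a real check, hitting the attempt limit makes the script exit non-zero, which fails the calling sh step instead of hanging the pipeline forever.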
Until the "Throttle Concurrent Builds" plugin has Pipeline support, a solution would be to effectively run one executor of the master with a label that your job requires.
To do this, create a new node in Jenkins, for example an SSH node that connects to localhost. You could also use the command option to run slave.jar/swarm.jar, depending on your setup. Give the node one executor and a label like "resource-foo", and give your job the same label. Now only one job with the label "resource-foo" can run at a time, because there is only one executor with that label. If you set the node to be in use as much as possible (the default) and reduce the number of master executors by one, it should behave exactly as desired without changing the total number of executors.
If you're like my team then you like having user-friendly parameterized Jenkins Jobs that pipeline scripts trigger in stages, instead of maintaining all that declarative/groovy soup. Unfortunately that means that each pipeline build takes up 2+ executor slots (one for the pipeline script and others for the triggered job(s)) so the danger of deadlock becomes very real.
I've looked everywhere for solutions to that dilemma, and disableConcurrentBuilds() only prevents the same job (branch) from running twice. It won't make pipeline builds for different branches queue up and wait instead of taking up precious executor slots.
A hacky (yet surprisingly elegant) solution for us was to limit the master node's executors to 1 and make the pipeline scripts stick to using it (and only it), then hook up a local slave agent to Jenkins in order to take care of all other jobs.