
How do I dynamically trigger downstream builds in Jenkins?

Posted 2019-02-08 11:49

Question:

We want to dynamically trigger integration tests in different downstream builds in jenkins. We have a parametrized integration test project that takes a test name as a parameter. We dynamically determine our test names from the git repo.

We have a parent project that uses jenkins-cli to start a build of the integration project for each test found in the source code. The parent project and integration project are related via matching fingerprints.

The problem with this approach is that aggregating the test results doesn't work. I think the problem is that the "downstream" integration tests are started via jenkins-cli, so jenkins doesn't realize they are downstream builds.

I've looked at many jenkins plugins to try to get this working. The Join and Parameterized Trigger plugins don't help because they expect a static list of projects to build. The parameter factories available for Parameterized Trigger won't work either because there's no factory to create an arbitrary list of parameters. The Log Trigger plugin won't work.

The Groovy Postbuild Plugin looks like it should work, but I couldn't figure out how to trigger a build from it.

Answer 1:

import hudson.model.*

// In an "Execute system Groovy script" build step, resolve the current build first:
def currentBuild = Thread.currentThread().executable

def job = hudson.model.Hudson.instance.getJob("job")
def params = new StringParameterValue('PARAMTEST', "somestring")
def paramsAction = new ParametersAction(params)
def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
def causeAction = new hudson.model.CauseAction(cause)
hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)

This is what finally worked for me.



Answer 2:

NOTE: The Pipeline Plugin should render this question moot (see the sketch below), but I haven't had a chance to update our infrastructure.
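For reference, a rough scripted Pipeline sketch of that approach (untested; the integration-tests job name, its TEST_NAME parameter, and discovering tests by listing a tests directory are stand-ins for our setup). The built-in build step records the upstream/downstream relation and the parameters for you:

node {
    checkout scm    // assumes the Pipeline is configured as "Pipeline script from SCM"
    // discover the test names from the repository, however you already do it
    def testNames = sh(script: 'ls tests', returnStdout: true).trim().split('\n')
    def branches = [:]
    testNames.each { t ->
        branches[t] = {
            build job: 'integration-tests',
                  parameters: [string(name: 'TEST_NAME', value: t)]
        }
    }
    parallel branches
}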

To start a downstream job without parameters:

job = manager.hudson.getItem(name)
cause = new hudson.model.Cause.UpstreamCause(manager.build)
causeAction = new hudson.model.CauseAction(cause)
manager.hudson.queue.schedule(job, 0, causeAction)

To start a downstream job with parameters, you have to add a ParametersAction. Suppose Job1 has parameters A and C which default to "B" and "D" respectively. I.e.:

A == "B"
C == "D"

Suppose Job2 has the same A and C parameters, but also takes parameter E, which defaults to "F". The following post-build script in Job1 will copy its A and C parameters and set parameter E to the concatenation of A's and C's values:

// Copy this build's parameters and concatenate their values
params = []
val = ''
manager.build.properties.actions.each {
    if (it instanceof hudson.model.ParametersAction) {
        it.parameters.each {
            value = it.createVariableResolver(manager.build).resolve(it.name)
            params += it      // reuse the parameter (A and C) as-is
            val += value      // build up the concatenated value for E
        }
    }
}

params += new hudson.model.StringParameterValue('E', val)
paramsAction = new hudson.model.ParametersAction(params)

jobName = 'Job2'
job = manager.hudson.getItem(jobName)
cause = new hudson.model.Cause.UpstreamCause(manager.build)
causeAction = new hudson.model.CauseAction(cause)
def waitingItem = manager.hudson.queue.schedule(job, 0, causeAction, paramsAction)
def childFuture = waitingItem.getFuture()
def childBuild = childFuture.get()    // blocks until the downstream build finishes

// Record the triggered build on this build, the way the Parameterized Trigger plugin does
hudson.plugins.parameterizedtrigger.BuildInfoExporterAction.addBuildInfoExporterAction(
    manager.build, jobName, childBuild.number, childBuild.result
)

You have to add $JENKINS_HOME/plugins/parameterized-trigger/WEB-INF/classes to the Groovy Postbuild plugin's Additional groovy classpath.



Answer 3:

Since you are already starting the downstream jobs dynamically, how about waiting until they are done and copying the test result files to the parent workspace? (I would archive them as artifacts on the downstream jobs and then just download those build artifacts.) You might need to aggregate the files manually, depending on whether the test plugin can work with several test result pages. In a post-build step of the parent job, configure the appropriate test plugin; a sketch of the copy step follows.
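A rough system Groovy sketch of the copy step, assuming you kept references to the child builds you scheduled (the childBuilds list below is hypothetical) and that each downstream job archives its JUnit XML reports as artifacts:

import hudson.FilePath
import hudson.model.*

def parentBuild = Thread.currentThread().executable
def ws = parentBuild.workspace

// childBuilds: hypothetical list of the downstream builds scheduled earlier
childBuilds.each { child ->
    // copy every archived XML report into a per-job folder in the parent workspace
    def artifacts = new FilePath(child.artifactsDir)
    artifacts.copyRecursiveTo('**/*.xml', ws.child("results/${child.project.name}"))
}

The parent job's "Publish JUnit test result report" post-build step can then be pointed at results/**/*.xml.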



Answer 4:

Using the Groovy Postbuild Plugin, maybe something like this will work (I haven't tried it):

def job = manager.hudson.getItem(jobname)    // jobname: name of the downstream job
manager.hudson.queue.schedule(job, 0)

I am actually surprised that the aggregated results are not picked up if you fingerprint both jobs (e.g. with the BUILD_TAG variable of the parent job). In my understanding, Jenkins simply looks at md5sums to relate jobs for "Aggregate downstream test results", and triggering via the CLI should not affect that. Somehow there is something additional going on to maintain the upstream/downstream relation that I am not aware of...



Answer 5:

This worked for me using an "Execute system Groovy script" build step:

import hudson.model.*

def currentBuild = Thread.currentThread().executable

def job = hudson.model.Hudson.instance.getJob("jobname")
def params = new StringParameterValue('paramname', "somestring")  
def paramsAction = new ParametersAction(params) 
def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
def causeAction = new hudson.model.CauseAction(cause) 
hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)


Answer 6:

Execute this Groovy script:

import hudson.model.*
import jenkins.model.*

def build = Thread.currentThread().executable

def jobPattern = "PUTHEREYOURJOBNAME"
def matchedJobs = Jenkins.instance.items.findAll { job ->
    job.name =~ /$jobPattern/
}
matchedJobs.each { job ->
    println "Scheduling job name is: ${job.name}"
    job.scheduleBuild(1, new Cause.UpstreamCause(build),
        new ParametersAction([
            new StringParameterValue("PROPERTY1", "PROPERTY1VALUE"),
            new StringParameterValue("PROPERTY2", "PROPERTY2VALUE")
        ]))
}

If you don't need to pass in properties from one build to the other, just take the ParametersAction out, as in the sketch below.
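For example, a minimal variant of the loop above without parameters (same assumptions as the script above):

matchedJobs.each { job ->
    job.scheduleBuild(1, new Cause.UpstreamCause(build))
}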

The build you schedule will have your initial build as its upstream "cause". That's a nice way to pass along the "Changes". If you don't need this, just do not pass new Cause.UpstreamCause(build) in the call.