Question:
I'm trying to set up our build process in Hudson.
Job 1 will be a super fast (hopefully) continuous integration build job that will be built frequently.
Job 2 will be responsible for running a comprehensive test suite, at a regular interval or triggered manually.
Job 3 will be responsible for running analysis tools across the codebase (much like Job 2).
I tried using the "Advanced Project Options > use custom workspace" feature so that code compiled in Job 1 can be used in Jobs 2 and 3. However, it seems that all build artifacts remain inside the Job 1 workspace. Am I doing this right? Is there a better way of doing this? I guess I'm looking for something similar to a build pipeline setup, so that things can be shared and the appropriate jobs can be executed in stages.
(I also considered using 'batch tasks', but it seems those can't be scheduled, only triggered manually?)
Any suggestions are welcomed. Thanks!
Answer 1:
You might want to try the Copy Artifact plugin:
http://wiki.hudson-ci.org/display/HUDSON/Copy+Artifact+Plugin
Your continuous job can build the necessary artifacts, and your other two jobs can pull them in to do analysis.
Answer 2:
Hudson has a plugin for just this problem: http://wiki.hudson-ci.org/display/HUDSON/Clone+Workspace+SCM+Plugin (link currently broken)
The corresponding Jenkins page is here: https://wiki.jenkins-ci.org/display/JENKINS/Clone+Workspace+SCM+Plugin
Answer 3:
Yes, that wiki page wasn't very helpful, in that it tries to make it all sound very elegant. The truth is that Hudson doesn't yet support job chains very elegantly when you have to pass stuff from one job to another.
I'm also using the zip-up-and-copy-workspace method to transfer workspaces from one job to another. I have a quick build, a full analysis build, and then distribution builds. In between, I use Ant to generate timestamps and "build-stamps" that record which build number of one job was produced from which build number of another. The fingerprinting feature helps keep track of files, but since I'm not archiving the workspace zips, fingerprinting is useless to the users because they can't actually see the workspace zips.
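For illustration, a minimal Ant sketch of that stamp-and-zip step. The target names, the build-stamp.properties file and the reliance on Hudson's JOB_NAME/BUILD_NUMBER environment variables are assumptions about a typical setup, not the exact script described above:
<property environment="env"/>

<!-- Sketch: record which job/build produced this workspace, then zip it for downstream jobs.
     Property and file names are illustrative. -->
<target name="build-stamp" description="Write a build-stamp with the producing job and build number">
    <tstamp>
        <format property="build.time" pattern="yyyy-MM-dd HH:mm:ss"/>
    </tstamp>
    <!-- JOB_NAME and BUILD_NUMBER are exported by Hudson for every build -->
    <propertyfile file="build-stamp.properties">
        <entry key="built.by.job" value="${env.JOB_NAME}"/>
        <entry key="built.by.number" value="${env.BUILD_NUMBER}"/>
        <entry key="built.at" value="${build.time}"/>
    </propertyfile>
</target>

<target name="zip-workspace" depends="build-stamp" description="Zip the workspace for the downstream job">
    <zip destfile="workspace.zip" basedir="." excludes="workspace.zip"/>
</target>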
Answer 4:
Have you looked at the Hudson wiki? Specifically: Splitting a big job into smaller jobs
Answer 5:
I had the same issue, and what I ended up going with is separate projects for the long-running tasks. The first step in those projects was to copy all the files from Job 1's workspace (i.e. its last build) into the Job 2/3/etc. workspaces. This usually worked, unless Job 1 was building when Job 2/3 started, in which case they would get an incomplete workspace. You could work around this by detecting "end of build" in Job 1 with a sentinel file (a rough sketch of this is below), or by using the Hudson locks plugin (I haven't tried it).
You don't have to use a custom workspace if you make assumptions about the placement of the other jobs relative to %WORKSPACE%.
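A rough Ant sketch of that first copy step, under the assumption that Job 1 drops a sentinel file (here called build-complete.marker) at the end of its build and that the jobs sit side by side under the Hudson home directory; both the path and the file name are illustrative:
<property environment="env"/>
<!-- Assumption: Job 1's workspace is reachable relative to this job's workspace -->
<property name="job1.workspace" location="${env.WORKSPACE}/../../job1/workspace"/>

<target name="fetch-job1-workspace" description="Copy Job 1's last workspace into this job's workspace">
    <!-- Refuse to run if Job 1 hasn't finished writing its sentinel file -->
    <fail message="Job 1 appears to be building (sentinel file missing)">
        <condition>
            <not>
                <available file="${job1.workspace}/build-complete.marker"/>
            </not>
        </condition>
    </fail>
    <copy todir="${env.WORKSPACE}">
        <fileset dir="${job1.workspace}"/>
    </copy>
</target>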
Answer 6:
I'm doing something like that now. I'd recommend avoiding any attempt to run many jobs in the same shared workspace. I've only had problems with that.
I'm using Maven and the free-style project type. One set of jobs runs when changes in the version control system trigger it; they create local snapshot artifacts. A second set of jobs runs nightly, sets up an integration test environment, and then runs tests against it.
If you aren't using Maven, one option is to set up an area on disk and have the final steps of Job 1 copy the artifacts to that spot. The first steps of Job 2 should be to move those files over; then run whatever you need to run.
As for Job 3, there are FindBugs/Checkstyle/PMD (and other) plugins for Hudson now. I'd recommend just creating a version of Job 1 that does a clean nightly checkout and runs those over your code base.
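If you drive those tools from Ant rather than only from the plugins' UI configuration, Job 3 mostly just needs to produce the XML reports the plugins read. A rough Checkstyle sketch; the taskdef class name, jar location and ruleset file vary by Checkstyle version and setup, so treat them all as assumptions:
<!-- Sketch: run Checkstyle and leave an XML report for the Hudson plugin to pick up.
     Class name, jar path and ruleset file are placeholders for your own setup. -->
<target name="static-analysis" description="Run Checkstyle over the source tree">
    <taskdef name="checkstyle"
             classname="com.puppycrawl.tools.checkstyle.ant.CheckstyleAntTask"
             classpath="lib/checkstyle-all.jar"/>
    <checkstyle config="checkstyle-rules.xml" failOnViolation="false">
        <fileset dir="src" includes="**/*.java"/>
        <formatter type="xml" toFile="build/checkstyle-result.xml"/>
    </checkstyle>
</target>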
Answer 7:
Hudson doesn't appear to have a built-in repository for build artifacts. Our solution was to create one.
We are in a Windows environment, so I created a share that can be accessed by all Hudson servers (we give the relevant services a common account, as the system account cannot access resources across a network).
Within our build scripts (Ant), we have tasks that copy resources built by other jobs into the local workspace, and jobs that generate artifacts copy them into the common repository.
In other environments, you could publish and fetch via FTP or any other mechanism for moving files.
Simplistic examples of publish and get tasks:
<!-- ==================== Publish ==================================== -->
<target name="Publish" description="Publish files">
    <mkdir dir="${publish.dir}/lib"/>
    <copy todir="${publish.dir}/lib" file="${project.jar}"/>
</target>
and
<!-- ==================== Get ==================================== -->
<target name="getdependencies" description="Get necessary results from published directory">
    <copy todir="${support.dir}">
        <fileset dir="${publish.dir}/lib">
            <include name="*.jar"/>
        </fileset>
    </copy>
</target>
Answer 8:
I agree that the current approach of manually copying files/artifacts/workspaces between jobs is less than elegant.
I also found it wasteful, space- and time-wise, to have to archive huge tgz/zip files. In our case these files were huge (1.5 GB) and took a long time to pack/archive/fingerprint/unpack.
So I settled on a slightly optimised variant of the same approach:
- Jobs 1/2/3 all check out/clone the same source repository, but
- Job 1 packs only the files that are actually build artifacts; with Git this is easy and fast via git ls-files -oz (not sure about other SCMs; see the Ant sketch below)
- the Copy Artifact plugin is used to transfer the files
- this reduces those files to about 1/3 of the size in our case -> speedup, less space wasted
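As an illustration of that packing step, a rough Ant sketch that treats everything Git does not track as build output. The file names are made up, and it uses newline-separated git ls-files -o output (rather than -oz) so the list can be fed straight to a fileset:
<!-- Sketch: zip only files produced by the build, i.e. everything untracked by Git.
     Assumes all build output is untracked; destination names are illustrative. -->
<target name="pack-build-artifacts" description="Zip only the files the build produced">
    <!-- Write the list of untracked files (one per line) to an include file -->
    <exec executable="git" output="untracked-files.txt" failonerror="true">
        <arg line="ls-files -o"/>
    </exec>
    <zip destfile="build-artifacts.zip">
        <fileset dir="." includesfile="untracked-files.txt"/>
    </zip>
</target>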