On a CI build server, the local Maven repository repeatedly fills up the file system (after a few days). What strategies are others using to trim the local repository in such a case? -Max
We use the build-helper-maven-plugin specifically for this purpose. In our company parent POM, the remove-project-artifact goal is embedded in a profile used for our Hudson builds. This way all old versions of the artifact are removed before the currently built version is installed.
Setting removeAll to true will wipe out all other snapshots except the one you're working on. This can be dangerous, as it may mean snapshots for a branch are wiped out as well.
For instance, if you have a snapshot 1.0.0.18-SNAPSHOT representing HEAD and a snapshot 1.0.1.17-SNAPSHOT representing a branch, running this plugin during the 1.0.0.18-SNAPSHOT build will wipe out the 1.0.1.17-SNAPSHOT folder.
To avoid this scenario, removeAll should be set to false.
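A minimal sketch of such a profile, assuming the build-helper-maven-plugin is configured in the parent POM (the profile id, plugin version, and phase binding are placeholders to adjust for your setup):

    <profile>
      <!-- Activated on the CI server, e.g. with -Pci-cleanup -->
      <id>ci-cleanup</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>build-helper-maven-plugin</artifactId>
            <version>3.4.0</version>
            <executions>
              <execution>
                <id>remove-old-artifacts</id>
                <phase>package</phase>
                <goals>
                  <goal>remove-project-artifact</goal>
                </goals>
                <configuration>
                  <!-- false: only remove artifacts of the version being built,
                       so snapshots of other branches/versions survive -->
                  <removeAll>false</removeAll>
                </configuration>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </profile>

With removeAll set to false, only artifacts of the version currently being built are removed, so branch snapshots such as 1.0.1.17-SNAPSHOT are left alone.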
The Maven dependency plugin has a purge-local-repository goal that lets you delete a given project's dependencies from the local repository. If this is run, say, once a day on each project, the snapshots will not accumulate.
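For example, a nightly job could run something along these lines on each project (snapshotsOnly and reResolve are options of the maven-dependency-plugin; leaving out reResolve=false would re-download everything that was purged):

    # Remove this project's snapshot dependencies from the local repository
    # without re-resolving (re-downloading) them afterwards
    mvn dependency:purge-local-repository -DsnapshotsOnly=true -DreResolve=false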
Alternatively there's a more scorched-earth approach you could take. As the problem is typically the timestamped snapshot artifacts, you could use the maven-antrun-plugin to delete all files that match the resource collection pattern.
For example (note this might need some tweaking as I've done it from memory):
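A minimal sketch of such a configuration, assuming the default local repository location and a pattern that matches snapshot directories (the phase and the include pattern may need adjusting):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-antrun-plugin</artifactId>
      <version>3.1.0</version>
      <executions>
        <execution>
          <id>purge-snapshots</id>
          <phase>clean</phase>
          <goals>
            <goal>run</goal>
          </goals>
          <configuration>
            <target>
              <!-- Delete the contents of snapshot directories in the local
                   repository; settings.localRepository resolves to the repo in use -->
              <delete verbose="true">
                <fileset dir="${settings.localRepository}">
                  <include name="**/*-SNAPSHOT/**"/>
                </fileset>
              </delete>
            </target>
          </configuration>
        </execution>
      </executions>
    </plugin>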
In addition to purge-local-repository (which reads to me like a nuclear option, as it only offers an excludes configuration as opposed to an explicit includes), take a look at the Remove Project Artifact mojo. I'm looking to implement it now, as my exact use case is to clear out large WAR and EAR snapshots that are being built on my CI (and sometimes workstation) machines.

We have employed a slightly different (and devious) technique. All artifacts that build "large things" (EARs, WARs, TARs) have their deploy location overridden like so:
This strategy causes the deploy goal to put things in the target directory, which of course is destroyed by the next CLEAN operation. To get even more aggressive, we have a postbuild step that does this:
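As a sketch only (the repository path and the set of extensions are assumptions, not the exact step referred to above), an aggressive post-build cleanup in this spirit could be:

    # Illustrative post-build cleanup: remove large packaged artifacts
    # (WAR/EAR/TAR) that ended up in the local repository during the build
    find "$HOME/.m2/repository" -type f \
      \( -name '*.war' -o -name '*.ear' -o -name '*.tar' \) -delete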
We employ yet one more strategy, too. In Hudson/Jenkins we provide a settings file to place the .m2 repository in the workspace for the job. This allows us to delete the entire repository before or after the job. It also makes artifacts visible in the workspace which aids in debugging some problems.
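A minimal sketch of such a settings file, assuming the WORKSPACE environment variable that Hudson/Jenkins sets for each job:

    <!-- ci-settings.xml: keep the local repository inside the job's workspace -->
    <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
      <localRepository>${env.WORKSPACE}/.m2/repository</localRepository>
    </settings>

The build is then invoked with something like mvn -s ci-settings.xml clean deploy, and the whole .m2 directory disappears whenever the workspace is wiped.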
How big is the file system? We have 10 GB allocated to builds and zap snapshots older than 30 days every night (a sketch of such a cleanup command follows these questions). That seems to work.
Are you doing builds every X hours or when code changes? Switching to code changes will reduce the number of artifacts without reducing coverage.
Are you installing all snapshots locally? You don't need to do this in all cases. In most cases, only the snapshots of actively developed dependencies need to be installed locally.
Are you installing EAR/WAR files locally? You probably don't need them either.
How many workspaces are you keeping? We use hudson and keep only the last 5 builds.
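A sketch of the nightly snapshot cleanup mentioned above, assuming the default ~/.m2/repository location and the 30-day threshold quoted there:

    # Remove snapshot version directories that have not been touched in 30 days
    find ~/.m2/repository -type d -name '*-SNAPSHOT' -mtime +30 -prune -exec rm -rf {} +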
If you're using Hudson, you can set up a scheduled job to just delete the entire repository once a day or something like that. I've got a job called hudson-maven-repo-clean which has this configuration:

    Shell command: rm -rf ~hudson/.m2/repository
    Schedule (cron): 0 0 * * *