I have several source code repositories containing various bits of code built by Jenkins, with a one-to-one mapping between Jenkins jobs and source repositories. Separate from these, I have a single repository containing Job DSL scripts for creating/updating the Jenkins jobs that build the other repos. The situation looks something like this:
I had hoped to find a way to store the Job DSL scripts inside the individual source repositories, right alongside the code, and have a single seed job triggered by push notifications to any of the other repositories. Unfortunately, this appears not to be well supported at the moment (see the accepted answer to this question). Given that, it seems simplest for now to leave all the DSL scripts in a single, separate repository and let push notifications to that repository trigger reprocessing of the Groovy scripts. And it all works fine, more or less.
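For context, the seed job over the single DSL repository can be quite small. Something like the following sketch (the repository URL, script glob, and job name are placeholders for my actual setup):

```groovy
// Hypothetical seed job: clones the DSL repository and processes every
// script it finds. URL and glob below are placeholders, not real values.
job('seed') {
    scm {
        git('https://example.com/jenkins-job-dsl.git')
    }
    triggers {
        // Empty schedule: rely on push notifications to kick off polling.
        scm('')
    }
    steps {
        dsl {
            external('jobs/**/*.groovy')
            // Disable (rather than delete) jobs whose scripts disappear.
            removeAction('DISABLE')
        }
    }
}
```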
That said, I'm slightly concerned that the seed job—when triggered—re-runs all of the DSL scripts, rather than only those that have actually changed. (I'm honestly not sure whether this is a 'problem', per se, I just worry that it could lead to surprising behavior later on.)
Is there some way I can restructure the seed job so that it only re-runs scripts modified by the commit that triggered the build? Or... is this not worth worrying about?
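(One idea I considered, sketched below, was a pipeline-style seed job that diffs the triggering commit and passes only the changed scripts to the `jobDsl` step. The base-commit fallback and script paths are assumptions about my layout, not anything Job DSL provides out of the box:)

```groovy
// Sketch: process only DSL scripts touched since the last successful
// build. GIT_PREVIOUS_SUCCESSFUL_COMMIT comes from the Git plugin.
node {
    checkout scm
    // Fall back to the previous commit on the very first build.
    def base = env.GIT_PREVIOUS_SUCCESSFUL_COMMIT ?: 'HEAD~1'
    def changed = sh(
        script: "git diff --name-only ${base} HEAD -- '*.groovy'",
        returnStdout: true
    ).trim()
    if (changed) {
        // 'targets' accepts newline-separated file patterns.
        jobDsl targets: changed
    } else {
        echo 'No DSL scripts changed; nothing to reprocess.'
    }
}
```

The obvious catch is that a partial run no longer sees the full set of scripts, so features like removed-job cleanup would misbehave. Hence the question: is this worth doing at all?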
Job DSL only updates a job if the generated config actually changes: it compares the generated XML against the job's existing config to decide whether an update is necessary (see JenkinsJobManagement.java). So as long as you don't modify jobs manually, and don't run malicious plugins that change job configs in the background, re-running the scripts causes no harm.
Job DSL also reuses the Groovy script engine when running multiple scripts, so the performance impact should be low (see DslScriptLoader.groovy).
So as long as your scripts have no side effects (e.g. making REST calls or modifying the file system), re-running them unnecessarily is not a problem.