I'm using the declarative pipeline syntax to do some CI work inside a Docker container.
I've noticed that the Docker plugin for Jenkins runs the container using the user ID and group ID of the jenkins user on the host (i.e., if the jenkins user has user ID 100 and group ID 111, it creates the container with docker run -u 100:111 ...).
I had some problems with this, as the container runs with a nonexistent user (in particular, I ran into issues with the user not having a home directory). So I thought of creating a Dockerfile that receives the user ID and group ID as build arguments and creates a proper jenkins user inside the container. The Dockerfile looks like this:
FROM ubuntu:trusty
ARG user_id
ARG group_id
# Add jenkins user
RUN groupadd -g ${group_id} jenkins
RUN useradd jenkins -u ${user_id} -g jenkins --shell /bin/bash --create-home
USER jenkins
...
The dockerfile agent has an additionalBuildArgs property, so I can read the user ID and group ID of the jenkins user on the host and pass those as build arguments. The problem I have now is that there seems to be no way of executing those commands in a declarative pipeline before specifying the agent. I want my Jenkinsfile to look something like this:
// THIS WON'T WORK
def user_id = sh(returnStdout: true, script: 'id -u').trim()
def group_id = sh(returnStdout: true, script: 'id -g').trim()
pipeline {
    agent {
        dockerfile {
            additionalBuildArgs "--build-arg user_id=${user_id} --build-arg group_id=${group_id}"
        }
    }
    stages {
        stage('Foo') {
            steps {
                ...
            }
        }
        stage('Bar') {
            steps {
                ...
            }
        }
        stage('Baz') {
            steps {
                ...
            }
        }
        ...
    }
}
Is there any way to achieve this? I've also tried wrapping the pipeline directive inside a node, but the pipeline needs to be at the root of the file.
I verified that trying to assign user_id and group_id without a node didn't work, as you found, but this worked for me to assign these values and later access them:
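Something along these lines (a sketch; the agent label and the echo stage are illustrative):

def user_id
def group_id
node {
    user_id = sh(returnStdout: true, script: 'id -u').trim()
    group_id = sh(returnStdout: true, script: 'id -g').trim()
}
pipeline {
    agent { label 'docker' }    // illustrative label
    stages {
        stage('Check') {
            steps {
                echo user_id
                echo group_id
            }
        }
    }
}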
Hopefully these will also work in your additionalBuildArgs statement. In a comment, you pointed out what is most likely a critical flaw in the approach of figuring out the user_id and group_id outside the declarative pipeline before using them to configure the dockerfile agent: the slave on which it discovers the user_id will not necessarily match the slave that it uses to kick off the docker-based build. I don't think there is any way around this while also keeping the declarative Jenkinsfile constraint.
You can guarantee one slave for all stages by using a global agent declaration: Jenkins declarative pipeline: What workspace is associated with a stage when the agent is set only for the pipeline?
But multiple node references with the same label don't guarantee the same workspace: Jenkins declarative pipeline: What workspace is associated with a stage when the agent is set only for the pipeline?
I believe we found a good way of dealing with this.
We have a Jenkins deployment that runs as a Docker container; I've mapped a volume for /var/jenkins_home and added the .ssh folder at /var/jenkins_home/.ssh.
We also run all builds inside Docker containers, using the dockerfile agent directive. Sometimes we need to access some of our private Composer libraries via Git over SSH.
We leverage Docker image caching by installing project dependencies (Composer) at build time, which means we only rebuild the build containers when our dependencies change. This in turn means we need to inject an SSH key during docker build.
See these example files:
project/Jenkinsfile
project/Dockerfile
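The gist, as a sketch (the SSH_KEY build argument name, the key path, the Git host, and the composer:2 base image are all illustrative; adjust to your setup):

project/Jenkinsfile:

pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile'
            // hand the master's private key to docker build; this relies on
            // the shell expanding $(cat ...) when docker build is invoked
            additionalBuildArgs '--build-arg SSH_KEY="$(cat /var/jenkins_home/.ssh/id_rsa)"'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'composer test'
            }
        }
    }
}

project/Dockerfile:

FROM composer:2
ARG SSH_KEY
COPY composer.json composer.lock ./
# write the key, install dependencies, and remove the key within a single
# RUN so it never persists in a committed image layer
# (github.com stands in for whatever Git host serves the private libraries)
RUN mkdir -p /root/.ssh \
 && echo "$SSH_KEY" > /root/.ssh/id_rsa \
 && chmod 600 /root/.ssh/id_rsa \
 && ssh-keyscan github.com >> /root/.ssh/known_hosts \
 && composer install \
 && rm -rf /root/.ssh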
If you have admin access to Jenkins, you can add two script approvals at this URI:
http://${jenkins_host:port}/jenkins/scriptApproval/
These will allow you to execute a shell command on the master in this way:
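Assuming the shell-out is done with Groovy's String.execute(), the two approval entries would be:

staticMethod org.codehaus.groovy.runtime.ProcessGroovyMethods execute java.lang.String
staticMethod org.codehaus.groovy.runtime.ProcessGroovyMethods getText java.lang.Process

With those approved, the IDs can be computed at the top of the Jenkinsfile and fed into the agent block, for example:

// runs in the master JVM, not on a build agent
def user_id = 'id -u'.execute().text.trim()
def group_id = 'id -g'.execute().text.trim()

pipeline {
    agent {
        dockerfile {
            additionalBuildArgs "--build-arg user_id=${user_id} --build-arg group_id=${group_id}"
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'id'    // confirm the in-container user matches the host jenkins user
            }
        }
    }
}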
You can also use the args parameter to solve the issue.
As described in Pipeline Syntax, docker optionally accepts an args parameter, which may contain arguments to pass directly to a docker run invocation. This is also possible when using dockerfile instead of docker in the agent section.
I had the same problem as you, and the following lines work fine for me:
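A sketch of that shape (the read-only passwd/group mounts and the HOME override are illustrative flags that give the mapped UID a valid identity and a writable home inside the container; they are one possibility, not the only one):

agent {
    dockerfile {
        filename 'Dockerfile'
        // let the host UID resolve to a named user inside the container,
        // and point HOME somewhere writable
        args '-v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro -e HOME=/tmp'
    }
}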
You can also add a block like this:
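For instance (a sketch; 1000:1000 is a placeholder for the actual IDs of the host's jenkins user, matching the build arguments the Dockerfile above expects):

agent {
    dockerfile {
        // placeholder IDs: substitute the host jenkins user's real user/group IDs
        additionalBuildArgs '--build-arg user_id=1000 --build-arg group_id=1000'
    }
}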
That will allow the container to have the correct user and group ID.