Question:
I'm using the declarative pipeline syntax to do some CI work inside a Docker container.
I've noticed that the Docker plugin for Jenkins runs the container with the user ID and group ID of the jenkins user on the host (i.e. if the jenkins user has user ID 100 and group ID 111, it runs the pipeline by creating a container with the command docker run -u 100:111 ...).
This caused some problems, because the container runs as a non-existent user (in particular, I ran into issues with the user not having a home directory). So I thought of writing a Dockerfile that receives the user ID and group ID as build arguments and creates a proper jenkins user inside the container. The Dockerfile looks like this:
FROM ubuntu:trusty
ARG user_id
ARG group_id
# Add jenkins user
RUN groupadd -g ${group_id} jenkins
RUN useradd jenkins -u ${user_id} -g jenkins --shell /bin/bash --create-home
USER jenkins
...
The dockerfile agent has an additionalBuildArgs property, so I can read the user ID and group ID of the jenkins user on the host and send those as build arguments. The problem I have now is that there seems to be no way of executing those commands in a declarative pipeline before specifying the agent. I want my Jenkinsfile to be something like this:
// THIS WON'T WORK
def user_id = sh(returnStdout: true, script: 'id -u').trim()
def group_id = sh(returnStdout: true, script: 'id -g').trim()
pipeline {
    agent {
        dockerfile {
            additionalBuildArgs "--build-arg user_id=${user_id} --build-arg group_id=${group_id}"
        }
    }
    stages {
        stage('Foo') {
            steps {
                ...
            }
        }
        stage('Bar') {
            steps {
                ...
            }
        }
        stage('Baz') {
            steps {
                ..
            }
        }
        ...
    }
}
Is there any way to achieve this? I've also tried wrapping the pipeline directive inside a node block, but the pipeline needs to be at the root of the file.
Answer 1:
I verified that trying to assign user_id and group_id without a node block doesn't work, as you found, but the following worked for me to assign those values and access them later:
def user_id
def group_id
node {
    user_id = sh(returnStdout: true, script: 'id -u').trim()
    group_id = sh(returnStdout: true, script: 'id -g').trim()
}
pipeline {
    agent { label 'docker' }
    stages {
        stage('commit_stage') {
            steps {
                echo 'user_id'
                echo user_id
                echo 'group_id'
                echo group_id
            }
        }
    }
}
Hopefully these will also work in your additionalBuildArgs statement.
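For example, plugging those values into the dockerfile agent from the question might look roughly like the sketch below. This is untested; it assumes the Dockerfile from the question sits at the repository root, and it is subject to the caveat discussed next about which node the lookup runs on:
def user_id
def group_id
node {
    user_id = sh(returnStdout: true, script: 'id -u').trim()
    group_id = sh(returnStdout: true, script: 'id -g').trim()
}
pipeline {
    agent {
        dockerfile {
            // values captured in the node block above are interpolated here
            additionalBuildArgs "--build-arg user_id=${user_id} --build-arg group_id=${group_id}"
        }
    }
    stages {
        stage('Foo') {
            steps {
                sh 'id' // should report the jenkins user created by the Dockerfile
            }
        }
    }
}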
In a comment, you pointed out what is most likely a critical flaw with the approach of figuring out the user_id and group_id outside the declarative pipeline before using them to configure the dockerfile agent: the slave on which it discovers the user_id will not necessarily match the slave that it uses to kick off the docker-based build. I don't think there is any way around this while also keeping the declarative Jenkinsfile constraint.
You can guarantee one slave for all stages by using a global agent declaration: Jenkins declarative pipeline: What workspace is associated with a stage when the agent is set only for the pipeline?
But multiple node references with the same label don't guarantee the same workspace: Jenkins declarative pipeline: What workspace is associated with a stage when the agent is set only for the pipeline?
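One possible (if restrictive) way around the node mismatch, assuming you can dedicate a single, named build node, is to pin both the lookup and the dockerfile agent to that exact node rather than to a shared label. This is an untested sketch; the node name build-node-01 is just a placeholder:
def user_id
node('build-node-01') { // hypothetical dedicated node
    user_id = sh(returnStdout: true, script: 'id -u').trim()
}
pipeline {
    agent {
        dockerfile {
            label 'build-node-01' // same machine as the lookup, so the uid matches
            additionalBuildArgs "--build-arg user_id=${user_id}"
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'id'
            }
        }
    }
}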
Answer 2:
You can also add a block like this:
agent {
    dockerfile {
        args '-v /etc/passwd:/etc/passwd -v /etc/group:/etc/group'
    }
}
That will allow the container to have the correct user and group ID.
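In the context of the question's Jenkinsfile, that block simply replaces the agent section. A minimal sketch follows; the :ro suffix making the bind mounts read-only is an optional precaution, not something this answer requires:
pipeline {
    agent {
        dockerfile {
            // makes the host's jenkins uid/gid resolvable inside the container
            args '-v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro'
        }
    }
    stages {
        stage('Foo') {
            steps {
                sh 'id' // should now resolve to the jenkins user name instead of a bare uid
            }
        }
    }
}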
Answer 3:
You can also use the args parameter to solve the issue.
As described in Pipeline Syntax:
docker also optionally accepts an args parameter which may contain arguments to pass directly to a docker run invocation.
This is also possible when using dockerfile instead of docker in the agent section.
I had the same problem as you, and the following lines work fine for me:
agent {
    dockerfile {
        dir 'Docker/kubernetes-cli'
        args '-u 0:0' // forces the container to run as the root user
        reuseNode true
    }
}
Answer 4:
I believe we found a good way of dealing with this.
We have a Jenkins deployment that runs as a Docker container; I've mapped a volume for /var/jenkins_home and added the .ssh folder at /var/jenkins_home/.ssh.
We also run all builds inside docker containers, using the dockerfile agent directive.
Sometimes we need to access some of our private composer libraries via git over ssh.
We leverage Docker image caching by installing the project dependencies (composer) at image build time, which means we only rebuild the build containers when our dependencies change. This also means we need to inject an SSH key during the docker build.
See these example files:
project/Jenkinsfile
def SSH_KEY
node {
    SSH_KEY = sh(returnStdout: true, script: 'cat /var/jenkins_home/.ssh/id_rsa')
}
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile'
            additionalBuildArgs '--build-arg SSH_KEY="' + SSH_KEY + '"'
            reuseNode true
        }
    }
    stages {
        stage('Fetch Deps') {
            steps {
                sh 'mv /home/user/app/vendor vendor'
            }
        }
        stage('Run Unit Tests') {
            steps {
                sh './vendor/bin/phpunit'
            }
        }
    }
}
project/Dockerfile
FROM mycompany/php7.2-common:1.0.2
# Provides the image for building mycompany/project on Jenkins.
WORKDIR /home/user/app
# SSH_KEY should receive a raw SSH private key during build.
ARG SSH_KEY
ADD composer.json .
RUN add-ssh-key "${SSH_KEY}" ~/.ssh/id_rsa && \
    composer install && \
    remove-ssh-keys
# Note: add-ssh-key and remove-ssh-keys are our shell scripts put in
# the base image to reduce boilerplate for common tasks.
Answer 5:
If you have admin access to Jenkins, you can add these two script approvals:
staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods execute java.lang.String
staticMethod org.codehaus.groovy.runtime.ProcessGroovyMethods getText java.lang.Process
at this URL: http://${jenkins_host:port}/jenkins/scriptApproval/
This will allow you to execute a shell command on the master like this:
def user = 'id -u'.execute().text
node {
echo "Hello World ${user}"
}
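Tying this back to the original question: with those approvals in place, the lookup can run at the top of the Jenkinsfile, outside any node block, and feed additionalBuildArgs directly. Note that .execute() runs id on the Jenkins master, so this only helps if the master's jenkins uid/gid matches the one on the build agent (or the master itself runs the build). A rough sketch:
// runs on the master, before any agent is allocated (requires the approvals above)
def user_id = 'id -u'.execute().text.trim()
def group_id = 'id -g'.execute().text.trim()
pipeline {
    agent {
        dockerfile {
            additionalBuildArgs "--build-arg user_id=${user_id} --build-arg group_id=${group_id}"
        }
    }
    stages {
        stage('Foo') {
            steps {
                sh 'id'
            }
        }
    }
}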