I have a workflow as follows for publishing webapps to my dev server. The server has a single docker host and I'm using docker-compose for managing containers.
- Push changes in my app to a private gitlab (running in docker). The app includes a Dockerfile and docker-compose.yml
- Gitlab triggers a jenkins build (jenkins is also running in docker), which does some normal build stuff (e.g. running tests)
- Jenkins then needs to build a new docker image and deploy it using docker-compose.
The problem I have is in step 3. The way I have it set up, the jenkins container has access to the host docker, so running any docker command in the build script is essentially the same as running it on the host. This is done using the following Dockerfile for jenkins:
FROM jenkins
USER root
# Give jenkins access to docker
RUN groupadd -g 997 docker
RUN gpasswd -a jenkins docker
# Install docker-compose
RUN curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
USER jenkins
and mapping the following volumes to the jenkins container:
-v /var/run/docker.sock:/var/run/docker.sock
-v /usr/bin/docker:/usr/bin/docker
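Putting it together, the container is started with something along these lines (the image tag my-jenkins is just a placeholder for whatever you build from the Dockerfile above):

docker run -d --name jenkins \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/bin/docker:/usr/bin/docker \
  my-jenkins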
A typical build script in jenkins looks something like this:
docker-compose build
docker-compose up
This works ok, but there are two problems:
It really feels like a hack. But the only other option I've found is to use the docker plugin for jenkins, publish to a registry, and then have some way of letting the host know it needs to restart. That is quite a lot more moving parts, and the docker-jenkins plugin requires the docker host to be listening on an open port, which I don't really want to expose.
The jenkins Dockerfile includes
groupadd -g 997 docker
which is needed to give the jenkins user access to docker. However, the GID (997) is the GID on the host machine, and is therefore not portable.
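For context, 997 is just whatever GID the docker group happens to have on this particular host; you can check it with, for example:

stat -c '%g' /var/run/docker.sock
getent group docker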
I'm not really sure what solution I'm looking for. I can't see any practical way to get around this approach, but it would be nice if there were a way to allow running docker commands inside the jenkins container without having to hard-code the GID in the Dockerfile. Does anyone have any suggestions about this?
My previous answer was more generic, describing how you can modify the GID inside the container at runtime. Now, by coincidence, one of my close colleagues asked for a jenkins instance that can do docker development, so I created this:
(The parent Dockerfile is the same one I have described in my answer to: Changing the user's uid in a pre-build docker container (jenkins))
To use it, mount both jenkins_home and docker.sock.
The jenkins process in the container will have the same UID as the mounted host directory. Assuming the docker socket is accessible to the docker group on the host, there is a group created in the container, also named docker, with the same GID.
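The mechanism is roughly the following entrypoint sketch (not the exact script from that image, just an illustration; it assumes gosu is installed and the standard paths of the official jenkins image):

#!/bin/bash
# Runs as root first, then drops privileges.

# UID of whoever owns the mounted jenkins_home on the host
uid=$(stat -c '%u' /var/jenkins_home)
# GID of the group owning the docker socket on the host
gid=$(stat -c '%g' /var/run/docker.sock)

# Make the container's jenkins user match that UID
usermod -o -u "$uid" jenkins
# Create (or adjust) a docker group in the container with the host's GID
groupadd -o -g "$gid" docker 2>/dev/null || groupmod -o -g "$gid" docker
gpasswd -a jenkins docker

# Start jenkins as the adjusted user
exec gosu jenkins /usr/local/bin/jenkins.sh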
I ran into the same issues. I ended up giving Jenkins passwordless sudo privileges because of the GID problem. I wrote more about this here: http://container-solutions.com/2015/03/running-docker-in-jenkins-in-docker/
This doesn't really affect security as having docker privileges is effectively equivalent to sudo rights.
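The Dockerfile change for that approach is small; a sketch of the idea (the base image is the official jenkins image, and sudo has to be installed on top of it):

FROM jenkins
USER root
# Install sudo and allow the jenkins user to use it without a password
RUN apt-get update && apt-get install -y sudo && rm -rf /var/lib/apt/lists/* \
    && echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
USER jenkins

Build steps then call sudo docker ... / sudo docker-compose ... instead of invoking docker directly.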
Please take a look at this Dockerfile I just posted: https://github.com/bdruemen/jenkins-docker-uid-from-volume/blob/master/gid-from-volume/Dockerfile
Here the GID is extracted from a mounted volume (a host directory). The GID of the container user's group is then changed to the same value. This has to be done as root, but root privileges are dropped afterwards with gosu. Everything happens in the ENTRYPOINT, so the real GID is unknown until you actually run the container. Note that after changing the GID, other files in the container may no longer be accessible to the process, so you might need a chgrp before the gosu command.
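A minimal sketch of that kind of ENTRYPOINT script (paths and names are illustrative, not the exact contents of the linked Dockerfile; gosu is assumed to be installed in the image):

#!/bin/bash
# Runs as root so the group can be modified, then drops to the jenkins user.

VOLUME=/var/jenkins_home    # illustrative mount point

# GID of the host directory that was mounted into the container
gid=$(stat -c '%g' "$VOLUME")

# Give the container user's group the same GID (-o allows a non-unique GID)
groupmod -o -g "$gid" jenkins

# Files baked into the image still carry the old numeric GID,
# so re-own what the process needs before dropping privileges
chgrp -R jenkins /usr/share/jenkins || true

# Drop root and start jenkins
exec gosu jenkins /usr/local/bin/jenkins.sh "$@"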
You can also change the UID the same way; see my answer here: Changing the user's uid in a pre-build docker container (jenkins). Maybe you want to change both to increase security.