Docker pipeline's “inside” not working in Jenkins

Published 2019-04-12 03:52

I'm having trouble getting a Jenkins pipeline script to work that uses the Docker Pipeline plugin to run parts of the build within a Docker container. Both the Jenkins server and the slave themselves run in Docker containers.

Setup

  • Jenkins server running in a Docker container
  • Jenkins slave based on custom image (https://github.com/simulogics/protokube-jenkins-slave) running in a Docker container as well
  • Docker daemon container based on docker:1.12-dind image
  • Slave started like so: docker run --link=docker-daemon:docker --link=jenkins:master -d --name protokube-jenkins-slave -e EXTRA_PARAMS="-username xxx -password xxx -labels docker" simulogics/protokube-jenkins-slave

Basic Docker operations (pulling, building, and pushing images) work just fine with this setup.
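For reference, a working build-and-push stage in this setup looks roughly like the following. This is only a sketch: the node label, image name, registry URL and credentials ID are placeholders, not values from my actual setup.

node('docker') {
    checkout scm

    stage('Build and Push Image') {
        // Build an image from the Dockerfile in the workspace (hypothetical tag).
        def image = docker.build('example/my-app:latest')

        // Push it to a registry; the URL and credentials ID are placeholders.
        docker.withRegistry('https://registry.example.com', 'registry-credentials') {
            image.push()
        }
    }
}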

(Non-)Goals

  • I want the server to not have to know about Docker at all. This should be a characteristic of the slave/node.
  • I do not need dynamic allocation of slaves or ephemeral slaves. One slave started manually is quite enough for my purposes.
  • Ideally, I want to move away from my custom Docker image for the slave and instead use the inside function provided by the Docker pipeline plugin within a generic Docker slave.

Problem

This is a representative build step that's causing the issue:

image.inside {
    stage ('Install Ruby Dependencies') {
        sh "bundle install"
    }
}
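(Here image is assumed to come from an earlier Docker Pipeline call somewhere above this step, along these lines; the ruby:2.5 tag is just an illustration, not the actual image used:)

// Hypothetical definition of 'image' earlier in the script; the tag is illustrative only.
def image = docker.image('ruby:2.5')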

This causes an error like the following in the log:

sh: 1: cannot create /workspace/repo_branch-K5EM5XEVEIPSV2SZZUR337V7FG4BZXHD4VORYFYISRWIO3N6U67Q@tmp/durable-98bb4c3d/pid: Directory nonexistent

Previously, this warning would show:

71f4de289962-5790bfcc seems to be running inside container 71f4de28996233340c2aed4212248f1e73281f1cd7282a54a36ceeac8c65ec0a but /workspace/repo_branch-K5EM5XEVEIPSV2SZZUR337V7FG4BZXHD4VORYFYISRWIO3N6U67Q could not be found among []

Interestingly enough, exactly this problem is described in the CloudBees documentation for the plugin at https://go.cloudbees.com/docs/cloudbees-documentation/cje-user-guide/index.html#docker-workflow-sect-inside:

For inside to work, the Docker server and the Jenkins agent must use the same filesystem, so that the workspace can be mounted. The easiest way to ensure this is for the Docker server to be running on localhost (the same computer as the agent). Currently neither the Jenkins plugin nor the Docker CLI will automatically detect the case that the server is running remotely; a typical symptom would be errors from nested sh commands such as "cannot create /…@tmp/durable-…/pid: Directory nonexistent" or negative exit codes.

When Jenkins can detect that the agent is itself running inside a Docker container, it will automatically pass the --volumes-from argument to the inside container, ensuring that it can share a workspace with the agent.

Unfortunately, the detection described in the last paragraph doesn't seem to work.

Question

Since both my server and slave are running in Docker containers, what kind of volume mapping do I have to use to make it work?

1 Answer
再贱就再见
#2 · 2019-04-12 03:57

I've seen variations of this issue, also with agents powered by the kubernetes-plugin.

I think that for it to work, the agent/jnlp container needs to share its workspace with the build container.

By build container I am referring to the one that will run the bundle install command.

This could possibly work via withArgs.
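If that means passing extra arguments through to the build container, a minimal sketch using the args parameter of the Docker Pipeline plugin's inside step might look like this. Note the assumptions: the ruby:2.5 image is just an example, and relying on the container hostname being the agent's container ID is a convention of the default Docker setup, not something guaranteed in every environment.

node('docker') {
    checkout scm

    // Inside a container the default hostname is the (short) container ID, so
    // --volumes-from <hostname> shares the agent container's volumes, including
    // the workspace, with the build container. This assumes default hostnames.
    def agentContainer = sh(script: 'hostname', returnStdout: true).trim()

    docker.image('ruby:2.5').inside("--volumes-from ${agentContainer}") {
        sh 'bundle install'
    }
}

Whether this actually helps in your case depends on the Docker daemon being able to see the agent's volumes at all, which is exactly the limitation the CloudBees documentation you quoted describes.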

The question is: why would you want to do that? Most of the pipeline steps are executed on the master anyway, and the actual build will run in the build container. What is the purpose of also using an agent?
