GitLab CI docker in docker can't create volume

Published 2019-04-09 23:05

I'm using docker in docker to host my containers as they work through the pipeline. The container I build from my code is set up with a volume that passes a gcloud key into the container. This works perfectly on my local machine, but on the gitlab-runner the volume doesn't link correctly.

From what I've read, this appears to be because the volume links a directory on the host to my container, rather than a directory on the dind host to my container.

How do I link the directory that is inside dind to my container?

(Also, ignore any minor issues with tagging and such; this CI file is very early in development.)

GitLab CI config below:

image: docker:latest
services:
  - docker:dind


variables:
  DOCKER_DRIVER: overlay2
  SPRING_PROFILES_ACTIVE: gitlab-ci
  CONTAINER_TEST_IMAGE: registry.gitlab.com/fdsa
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/asdf

stages:
  - build_test_image
  - deploy

.docker_login: &docker_login | # This is an anchor
     docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com

build test image:
    stage: build_test_image
    script:
      - *docker_login
      - docker build -t $CONTAINER_TEST_IMAGE .
      - docker push $CONTAINER_TEST_IMAGE

test run:
    stage: deploy
    script:
        - *docker_login
        - mkdir /key
        - echo $GCP_SVC_KEY > /key/application_default_credentials.json
        # BROKEN LINE HERE
        - docker run --rm -v "/key:/.config/gcloud/" $CONTAINER_TEST_IMAGE
    tags:
        - docker

2 Answers

神经病院院长
Answered 2019-04-09 23:45

Background

Your problem is that dind runs ALL containers on your host (the top-level Docker engine). So when you mount a directory into $CONTAINER_TEST_IMAGE (the second-level container), that container actually runs on your host via the mounted socket, and it therefore looks for the directory on that Docker host, not in your job container.
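A quick way to see this from the job itself (just a sketch; it assumes the alpine image can be pulled, and the paths are only for illustration): the /key directory created inside the job container does not exist on the dind daemon's filesystem, so Docker creates an empty directory there and the mount shows up empty.

- mkdir /key
- echo test > /key/probe.txt
- docker run --rm -v "/key:/data" alpine ls -la /data    # empty: /key is resolved on the dind daemon, not in the job container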

I've had this same issue mounting tests in containers and solved it through linking volumes between containers.

Solution

In your case I think the docker cp command could solve your need: copy the /key/application_default_credentials.json file into the container instead of mounting it.

Something like:

- docker run --name="myContainer" -d $CONTAINER_TEST_IMAGE
- docker cp /key/application_default_credentials.json myContainer:/.config/gcloud/application_default_credentials.json
- docker exec myContainer run_tests_or_whatever_command
- docker rm -f myContainer
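
Applied to the job from the question, a sketch could look like the following (svc_key_test is just a placeholder container name; docker create/start is used instead of docker run -d so the key file is already in place before the image's entrypoint starts):

test run:
    stage: deploy
    script:
        - *docker_login
        - mkdir /key
        - echo $GCP_SVC_KEY > /key/application_default_credentials.json
        # create the container without starting it
        - docker create --name svc_key_test $CONTAINER_TEST_IMAGE
        # copy the key into the stopped container's filesystem
        - docker cp /key/application_default_credentials.json svc_key_test:/.config/gcloud/application_default_credentials.json
        # start it and attach so the job gets the output and exit code
        - docker start -a svc_key_test
        - docker rm -f svc_key_test
    tags:
        - docker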
看我几分像从前
Answered 2019-04-09 23:52

The other answer is perfectly valid, but I wanted to share my solution:

Apparently dind mounts the build directory, so sub-containers can "see" its contents. By placing the key under "./" (the project directory) it becomes visible to those containers. I use $(pwd) because docker run -v needs an absolute path and won't accept ~ or ".".

test run:
    stage: deploy
    script:
        - *docker_login
        - mkdir ./key
        - echo $GCP_SVC_KEY > ./key/application_default_credentials.json
        - docker run --rm -v "$(pwd)/key:/.config/gcloud/" $CONTAINER_TEST_IMAGE
    tags:
        - docker
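
If you want to double-check that the dind daemon really sees the project directory (as assumed above), a throwaway probe like this can be added to the script; it's only a sketch and assumes the alpine image can be pulled:

        - echo marker > ./key/marker.txt
        - docker run --rm -v "$(pwd)/key:/data" alpine cat /data/marker.txt    # prints "marker" when the build directory is shared with dind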