How do I set docker-credential-ecr-login in my PATH

Published 2020-07-14 05:48

Question:

I'm using AWS ECR to host a private Docker image, and I would like to use it in GitLab CI.

According to the documentation, I need to set up docker-credential-ecr-login to fetch the private image, but I have no idea how to do that before anything else runs. This is my .gitlab-ci.yml file:

image: 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest

tests:
  stage: test
  before_script:
    - echo "before_script"
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
  script:
    - echo "script"
    - bundle install
    - bundle exec rspec
  allow_failure: true # for now as we do not have tests

Thank you.

Answer 1:

I can confirm the feature at stake is not yet available in GitLab CI; however, I've recently found it is possible to implement a generic workaround: run the dedicated CI script within a container started from the private Docker image.

The template file .gitlab-ci.yml below is adapted from the OP's example, using the Docker-in-Docker approach I suggested in this other SO answer, itself inspired by the GitLab CI doc dealing with dind:

stages:
  - test

variables:
  IMAGE: "0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest"
  REGION: "us-east-1"  # must match the region of the ECR registry in $IMAGE

tests:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  # variables:
  #   GIT_STRATEGY: none  # uncomment these two lines if "git clone" is unneeded for this job
  before_script:
    - ': before_script'
    # amazon-ecr-credential-helper isn't needed here (and docker:latest has no apt): the "aws ecr get-login" line below performs the docker login
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
    - $(aws ecr get-login --no-include-email --region "$REGION")
    - docker pull "$IMAGE"
  script:
    - ': script'
    - |
      docker run --rm -v "$PWD:/build" -w /build "$IMAGE" /bin/bash -c "
        export PS4='+ \e[33;1m($CI_JOB_NAME @ line \$LINENO) \$\e[0m '  # optional
        set -ex
        ## TODO insert your multi-line shell script here ##
        echo \"One comment\"  # quotes must be escaped here
        : A better comment
        echo $PWD  # interpolated outside the container
        echo \$PWD  # interpolated inside the container
        bundle install
        bundle exec rspec
        ## (cont'd) ##
      "
    - ': done'
  allow_failure: true # for now as we do not have tests

This example assumes the Docker image $IMAGE contains the /bin/bash binary, and relies on YAML's literal block scalar style (the | indicator) for the multi-line script.
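
One caveat: the aws ecr get-login subcommand only exists in AWS CLI v1. If the job ends up with AWS CLI v2 instead, a minimal sketch of the equivalent login step (untested here; the registry host is simply taken from the question's image, adjust it to yours) would be:

# AWS CLI v2 removed "aws ecr get-login"; pipe get-login-password into docker login instead.
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin 0222822883.dkr.ecr.us-east-1.amazonaws.com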

The above template already contains comments, but to be self-contained:

  • You need to escape double quotes if your Bash commands contain them, because the whole code is surrounded by docker run … " and ";
  • You also need to escape the Bash variables that should be expanded inside the container (cf. the \$PWD above); otherwise they will be expanded before the docker run … "$IMAGE" /bin/bash -c "…" command itself runs.
  • I replaced the echo "stuff" commands with their lighter-weight colon (:) counterpart:

    set -x
    : stuff
    : note that these three shell commands do nothing
    : but printing their args thanks to the -x option.
    

[Feedback is welcome, as I can't directly test this config (I'm not an AWS ECR user); I'm also puzzled by the fact that the OP's example contained both apt and apk commands at the same time…]

Related remark on a pitfall of set -e

Beware that the following pattern is error-prone: if command1 fails, command2 is skipped, but set -e does not abort the script, so command3 still runs:

set -e
command1 && command2
command3

Instead, write:

set -e
command1 ; command2
command3

or:

set -e
( command1 && command2 )
command3

To convince yourself, you can try running:

bash -e -c 'false && true; echo $?; echo this should not be run'
  → 1
  → this should not be run
bash -e -c 'false; true; echo $?; echo this should not be run'
  → (no output: the shell exits at the top-level false)
bash -e -c '( false && true ); echo $?; echo this should not be run'
  → (no output: the parenthesized list fails as a whole, so the shell exits)


Answer 2:

From the GitLab documentation: in order to interact with your AWS account, GitLab CI/CD pipelines require both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be defined in your GitLab settings under Settings > CI/CD > Variables. Then add the ECR login to your before_script:

image: 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest

tests:
  stage: test
  before_script:
    - echo "before_script"
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
    - $( aws ecr get-login --no-include-email )
  script:
    - echo "script"
    - bundle install
    - bundle exec rspec
  allow_failure: true # for now as we do not have tests
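
Note that aws ecr get-login without a --region flag also needs a default region, e.g. an AWS_DEFAULT_REGION CI/CD variable. If the variables are wired up correctly, a quick sanity check you could place just before the login line (a sketch relying only on standard AWS CLI commands) is:

# Prints the account/ARN the job is authenticated as; fails fast if the
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY variables are missing or wrong.
aws sts get-caller-identity
# Then the login step can authenticate Docker against the registry (AWS CLI v1 syntax):
$(aws ecr get-login --no-include-email --region us-east-1)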

Also, you had a typo: it is awscli, not awsclir. Then add the build, test, and push steps accordingly.



Answer 3:

I think there is a logic error here: the image in the CI configuration is the image the CI script runs in, not the image you build and deploy.

I don't think you have to use it there at all, since it is just an image that provides utilities and tooling for GitLab CI; normally it shouldn't have any dependencies on your project.

Please check examples like this one https://gist.github.com/jlis/4bc528041b9661ae6594c63cd2ef673c to get a clearer idea of how to do it the correct way.
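
To illustrate the point, such examples build and push the project image from a job's script, running inside a generic runner image, instead of using the private image as the job's image. A rough sketch (assuming a docker:dind-based job, AWS CLI v1 as elsewhere in this thread, and reusing the registry/image name from the question):

# Log in to ECR, then build and push the project's image from within the CI job.
$(aws ecr get-login --no-include-email --region us-east-1)
docker build -t 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest .
docker push 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest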



Answer 4:

I faced the same problem when using the Docker executor mode of GitLab Runner.

SSHing into the EC2 instance showed that docker-credential-ecr-login was present in /usr/bin/. To make it available to jobs, I had to mount the binary into the GitLab Runner's job containers:

gitlab-runner register -n \
--url '${gitlab_url}' \
--registration-token '${registration_token}' \
--template-config /tmp/gitlab_runner.template.toml \
--executor docker \
--tag-list '${runner_name}' \
--description 'gitlab runner for ${runner_name}' \
--docker-privileged \
--docker-image "alpine" \
--docker-disable-cache=true \
--docker-volumes "/var/run/docker.sock:/var/run/docker.sock" \
--docker-volumes "/cache" \
--docker-volumes "/usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login" \
--docker-volumes "/home/gitlab-runner/.docker:/root/.docker"

More information in this thread as well: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/1583#note_375018948