Workflow for building, pushing, and testing Docker images on Google Container Engine (GKE)


Question:

I am developing a Kubernetes service for deployment to Google Container Engine (GKE). Until recently, I have built Docker images in Google Cloud Shell, but I am hitting quota limits now, because the overall load on the free VM instance where Cloud Shell runs is apparently too high from multiple docker builds and pushes. My experience so far is that after about a week of sustained work I face the following error message and have to wait about two days before Cloud Shell becomes available again.

Service usage limits temporarily exceeded. Try connecting later.

I have tried to shift my docker builds and pushes onto billable machines (GCE VM instances or GKE cluster nodes), but without complete success:

  • On a GCE VM instance, Docker is apparently not installed. (Also makes sense.)

  • On a GKE cluster node, Docker is installed and I can (sudo) docker build my image, but docker push (even via gcloud docker) fails after a few seconds, having pushed a few layers, with the following error message: denied: Access denied. (A sketch of the commands is below.)
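
For reference, this is roughly the sequence I ran on the cluster node; gcr.io/my-project/my-service is a placeholder image name, not the real one (newer Cloud SDK versions use gcloud docker -- push instead of gcloud docker push):

sudo docker build -t gcr.io/my-project/my-service:v1 .
# the gcloud wrapper supplies registry credentials before calling docker push
sudo gcloud docker push gcr.io/my-project/my-service:v1
# ... pushes a few layers, then fails with: denied: Access denied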

So what is a sustainable development workflow for Docker images inside GKE? Am I supposed to install Docker on a VM instance (I hope not), or where else can I hope to docker build, docker push and ultimately kubectl create my service without running into work-stalling quota limits? (I am using a MacBook as my local development machine and would prefer not to install Docker there either, if I can help it. That is, I prefer to build Docker images in the cloud.)

UPDATE If I equip a VM instance with a Container-VM image as follows, docker build succeeds, but docker push fails just as it did on the GKE cluster node (with denied: Access denied):

gcloud compute images list \
  --project google-containers \
  --no-standard-images
gcloud compute instances create tmp \
  --machine-type g1-small \
  --image container-vm-v20160321 \
  --image-project google-containers \
  --zone europe-west1-d

Answer 1:

The solution consisted of adding the scope storage-rw to the instance (otherwise only read-only access, storage-ro, applies by default):

gcloud compute images list \
  --project google-containers \
  --no-standard-images
gcloud compute instances create tmp \
  --machine-type g1-small \
  --image container-vm-v20160321 \
  --image-project google-containers \
  --zone europe-west1-d \
  --scopes compute-rw,storage-rw

In addition, I also had to install kubectl (like so) and configure it (like so), so overall this is quite a bit of a hassle. (Also, the configuration has to be updated when the cluster's endpoint changes, e.g. after a recreation.)
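
The kubectl setup on the VM was roughly as follows; my-cluster is a placeholder for the actual cluster name, and the components command assumes gcloud was installed in a way that supports it (otherwise install kubectl via the appropriate package manager):

gcloud components install kubectl
# write credentials for the existing GKE cluster into ~/.kube/config
gcloud container clusters get-credentials my-cluster --zone europe-west1-d
# sanity check that kubectl can reach the cluster
kubectl cluster-info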

But I can now use a dedicated VM instance (such as tmp) for development work on Docker images.
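
For completeness, a typical iteration on the tmp instance then looks roughly like this; the project, image and manifest names are placeholders:

gcloud compute ssh tmp --zone europe-west1-d
# on the instance:
sudo docker build -t gcr.io/my-project/my-service:v1 .
# the push now succeeds thanks to the storage-rw scope
sudo gcloud docker push gcr.io/my-project/my-service:v1
kubectl create -f my-service.yaml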

UPDATE Added the scope compute-rw, which is necessary e.g. for manipulating GCE addresses (as in gcloud compute addresses list).
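
For example, with compute-rw the instance can both list and reserve addresses; my-static-ip is a placeholder name:

gcloud compute addresses list
# reserving an address is a write operation and requires compute-rw
gcloud compute addresses create my-static-ip --region europe-west1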