How can I speed up Rails Docker deployments on Google Cloud Platform?

Posted 2019-01-15 11:00

I'm experimenting with more cost-effective ways to deploy my Rails apps, and went through the Ruby Starter Projects to get a feel for Google Cloud Platform.

It's almost perfect, and certainly competitive on price, but the deployments are incredibly slow.

When I run the deployment command from the sample Bookshelf app:

$ gcloud preview app deploy app.yaml worker.yaml --promote

I can see a new gae-builder-vm instance on the Compute Engine/VM Instances page and I get the familiar Docker build output - this takes about ten minutes to finish.

If I immediately redeploy, though, I get a new gae-builder-vm spun up that goes through the exact same ten-minute build process with no apparent caching from the first time the image was built.

In both cases, the second module (worker.yaml) gets cached and goes really quickly:

Building and pushing image for module [worker]
---------------------------------------- DOCKER BUILD OUTPUT ----------------------------------------
Step 0 : FROM gcr.io/google_appengine/ruby
---> 3e8b286df835
Step 1 : RUN rbenv install -s 2.2.3 &&     rbenv global 2.2.3 &&     gem install -q --no-rdoc --no-ri bundler --version 1.10.6 &&     gem install -q --no-rdoc --no-ri foreman --version 0.78.0
---> Using cache
---> efdafde40bf8
Step 2 : ENV RBENV_VERSION 2.2.3
---> Using cache
---> 49534db5b7eb
Step 3 : COPY Gemfile Gemfile.lock /app/
---> Using cache
---> d8c2f1c5a44b
Step 4 : RUN bundle install && rbenv rehash
---> Using cache
---> d9f9b57ccbad
Step 5 : COPY . /app/
---> Using cache
---> 503904327f13
Step 6 : ENTRYPOINT bundle exec foreman start --formation "$FORMATION"
---> Using cache
---> af547f521411
Successfully built af547f521411

but it doesn't make sense to me that these layers couldn't be cached between deployments if nothing has changed.

Ideally, I'm thinking this would go faster if I triggered a rebuild on a dedicated build server (which could keep Docker images cached between builds), had it push the resulting image somewhere public, and then asked Google to deploy that prebuilt image - something like the sketch below.
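Roughly what I have in mind (just a sketch - my-project and the image name are placeholders, and I'd still need to confirm that the preview deploy command accepts an --image-url flag the way later gcloud app deploy docs describe it, and that gcloud docker push is the right push syntax for my SDK version):

# build on a machine where the Docker layer cache survives between runs
$ docker build -t gcr.io/my-project/bookshelf .
# push the image to the project's Container Registry
$ gcloud docker push gcr.io/my-project/bookshelf
# ask App Engine to use the prebuilt image instead of rebuilding it remotely
$ gcloud preview app deploy app.yaml --promote --image-url=gcr.io/my-project/bookshelf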

Here's the Docker file that was generated by gcloud:

# This Dockerfile for a Ruby application was generated by gcloud with:
# gcloud preview app gen-config --custom

# The base Dockerfile installs:
# * A number of packages needed by the Ruby runtime and by gems
#   commonly used in Ruby web apps (such as libsqlite3)
# * A recent version of NodeJS
# * A recent version of the standard Ruby runtime to use by default
# * The bundler and foreman gems
FROM gcr.io/google_appengine/ruby

# Install ruby 2.2.3 if not already preinstalled by the base image
# base image: https://github.com/GoogleCloudPlatform/ruby-docker/blob/master/appengine/Dockerfile
# preinstalled ruby versions: 2.0.0-p647 2.1.7 2.2.3
RUN rbenv install -s 2.2.3 && \
    rbenv global 2.2.3 && \
    gem install -q --no-rdoc --no-ri bundler --version 1.10.6 && \
    gem install -q --no-rdoc --no-ri foreman --version 0.78.0
ENV RBENV_VERSION 2.2.3

# To install additional packages needed by your gems, uncomment
# the "RUN apt-get update" and "RUN apt-get install" lines below
# and specify your packages.
# RUN apt-get update
# RUN apt-get install -y -q (your packages here)

# Install required gems.
COPY Gemfile Gemfile.lock /app/
RUN bundle install && rbenv rehash

# Start application on port 8080.
COPY . /app/
ENTRYPOINT bundle exec foreman start --formation "$FORMATION"
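
For reference, the same layer-caching behaviour can be reproduced locally (assuming Docker is installed and the gcr.io/google_appengine/ruby base image can be pulled) - building twice with a small code change in between shows that only the later steps rebuild; the touched file below is just an example:

# first build populates the local layer cache
$ docker build -t bookshelf-test .
# change something under the app tree (any file edit will do)
$ touch app/models/book.rb
# second build re-uses Steps 0-4 from cache; only Step 5 (COPY . /app/) onward rebuilds
$ docker build -t bookshelf-test .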

How can I make this process faster?

1 Answer
Fickle 薄情
2019-01-15 11:39

Well, you're kinda mixing up 2 different cases:

  • re-deploying the exact same app code - indeed, Google doesn't check whether anything in the app actually changed, in which case the entire Docker image could be re-used. But you already have that image deployed, so effectively you don't even need to re-deploy - unless you suspect something went wrong and you really insist on re-building the image (which is exactly what the deployment utility does). A rather academic case with little bearing on the cost-effectiveness of real-life app deployments :)
  • deploying different app code (no matter how small the change) - short of re-using cached artifacts during the image build (which does happen, according to your build logs), the final image still needs to be rebuilt to incorporate the new app code - that's unavoidable. Re-using the previously built image is not really possible.

Update: I missed your point earlier. On a closer look at both of your logs I agree with your observation: the cache appears to be local to each build VM (which explains why the cache hits only show up while building the worker module, on the same VM where the corresponding default module was built just beforehand), and is thus not re-used across deployments.

Another Update: there might be a way to get cache hits across deployments...

The gcloud preview app deploy DESCRIPTION indicates that the hosted build could also be done using the Container Builder API (which appears to be the default setting!) in addition to a temporary VM:

To use a temporary VM (with the default --docker-build=remote setting), rather than the Container Builder API to perform docker builds, run:

$ gcloud config set app/use_cloud_build false

Builds done using the Container Builder API might use shared storage, which might allow cache hits across deployments. IMHO it's worth a try.
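If your configuration currently forces the temporary-VM path, switching back and redeploying would look like this (the second command just repeats your original deploy):

# opt back in to the Container Builder API for hosted docker builds
$ gcloud config set app/use_cloud_build true
# redeploy and check whether previously built layers get re-used
$ gcloud preview app deploy app.yaml worker.yaml --promote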
