I have a Ruby on Rails app I'm trying to containerize so that I can deploy it with Docker:
version: '3.4'
services:
  db:
    image: postgres
  web:
    container_name: my_rails_app
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
The existing build process for this application includes a script which starts up a test version of the application with a bunch of sample data and takes multiple screenshots of various features of the application, to be used in help files. This script is integrated into Ruby on Rails' asset pipeline, so it runs as part of the normal asset precompile process (the Dockerfile below is somewhat simplified):
FROM ruby:2.2.10 AS build
COPY . /opt/my_rails_app
WORKDIR /opt/my_rails_app
RUN bundle install
# Generates static assets, including screenshots
RUN bundle exec rake assets:precompile
RUN bundle install --deployment --without development test assets

FROM ruby:2.2.10 AS production
ENV RAILS_ENV=production
COPY . /opt/my_rails_app
WORKDIR /opt/my_rails_app
COPY --from=build /opt/my_rails_app/vendor/bundle vendor/bundle
COPY --from=build /opt/my_rails_app/public/assets public/assets
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server"]
Now here's my problem: because this build step starts the web server in order to take screenshots of the UI, it needs to be able to connect to the database during the build. (The web server can't run properly without a database connection.)
In the old environment this wasn't an issue, as the production database was installed on the same server as the application, so I could just have it connect to localhost. With Docker, though, the database runs in a separate container managed by docker-compose. And unfortunately, Docker doesn't seem to want to let me talk to that container during the build:
could not translate host name "db" to address: Name or service not known
I considered just delaying the precompile step until after the deploy, but that would slow down the deploy process for containers significantly and require me to include another 50-100 dependencies in my production container which aren't otherwise used.
I also considered installing a database on the build container, but that seems like it'd slow builds down a lot and might cause issues if the database on the build container doesn't exactly match the one the postgres image provides.
Is there a way to tell docker to start a container and set up the appropriate network connections during the build process? Or perhaps an alternative solution that doesn't have any of the downsides of the other solutions I already considered above?
This can be done, but it's not very well supported by Docker Compose at the moment.
What you want to do is use Docker's built-in networking features to set up a shared network containing the database container, then attach your application container to that network during the build process.
Without Docker Compose, this would be accomplished by starting the database container, using the docker network subcommands to connect it to a Docker network, and then building the application container with docker build --network $network_name_here . (the trailing . is the build context).
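For example, something along these lines (the network and container names are placeholders, not from the original setup):

# Create a user-defined network and attach a Postgres container to it
docker network create build_net
docker run -d --name db postgres
docker network connect build_net db

# Build the application image on that network; the hostname "db" now
# resolves inside RUN steps via Docker's embedded DNS
docker build --network build_net -t my_rails_app .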
With Docker Compose, this can be accomplished by configuring a static name for your application's default network via its name attribute, then using the undocumented network option under the service's build key to tell Compose to use that network for your application container during the build:
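A minimal sketch of that configuration, based on the compose file from the question (the network name my_rails_app_default is just an example; the top-level name attribute for networks requires Compose file format 3.5 or newer, hence the version bump):

version: '3.5'
services:
  db:
    image: postgres
  web:
    container_name: my_rails_app
    build:
      context: .
      # Use the named default network during the build, so "db" is reachable
      network: my_rails_app_default
    ports:
      - "3000:3000"
    depends_on:
      - db
networks:
  default:
    name: my_rails_app_default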
On its own though, even this isn't enough. That's because by default Docker Compose doesn't start your container's dependencies during the build. To fix this, you'll need to manually start the database container before you run the build:
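With the service names above, that looks something like:

# Start only the database container, then build the web image on the shared network
docker-compose up -d db
docker-compose build web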
This will allow the web service to connect to the db service while the web service is being built.
There is the --network flag, which I guess will let you run your build container in any of Docker's network modes. You could then create the Postgres container and run the build container in that container's network.
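An untested sketch of that idea; I'm assuming here that your Docker version accepts the container:<name> network mode for docker build --network (if it doesn't, a shared user-defined network as in the other answer achieves the same thing):

# Start Postgres first, then run the build steps inside that container's
# network namespace; in this mode the database is reachable on localhost
docker run -d --name build_db postgres
docker build --network container:build_db -t my_rails_app .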