Deploying with Docker into production: Zero downtime

Published 2019-04-13 10:14

Question:

I'm failing to see how it is possible to achieve zero-downtime deployments with Docker.

Let's say I have a PHP container running MyWebApp, served by an Nginx container on the same server. I then change some code; since Docker containers are immutable, I have to rebuild and redeploy the MyWebApp container with the code changes. During the time it takes to do this, MyWebApp is down for the count...

Previously I would use Ansible or similar to deploy my code, then symlink the new release directory to the web dir... zero downtime!

Is it possible to achieve zero downtime deployments with Docker and a single server app?

Answer 1:

You could do a blue-green deployment with your containers, using an nginx upstream block:

upstream containers {
  server 127.0.0.1:9990;  # blue
  server 127.0.0.1:9991;  # green
}

location ~ \.php$ {
  fastcgi_pass containers;
  ...
}

Then, when deploying your containers, you'll have to alternate between port mappings:

# assuming php-fpm runs on port 9000 inside the container
# current state: green container running, need to deploy blue
# get last app version
docker pull my_app
# remove previous container (was already stopped)
docker rm blue
# start new container
docker run -p 9990:9000 --name blue my_app
# at this point both containers are running and serve traffic
docker stop green
# nginx will detect failure on green and stop trying to send traffic to it

To deploy green, swap the color names and port mappings in the commands above.
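The alternation above can be sketched as a small shell helper. This is a minimal sketch; the function names and the detached `-d` flag are illustrative additions, not part of the original answer:

```shell
#!/bin/sh
# Sketch of the blue/green alternation logic (helper names hypothetical).

port_for() {
  # Each color owns a fixed host port from the nginx upstream block.
  case "$1" in
    blue)  echo 9990 ;;
    green) echo 9991 ;;
  esac
}

next_color() {
  # The color currently serving traffic determines which one we rebuild.
  if [ "$1" = "green" ]; then echo blue; else echo green; fi
}

deploy() {
  current="$1"
  target=$(next_color "$current")
  port=$(port_for "$target")
  docker pull my_app
  docker rm "$target" 2>/dev/null || true          # was already stopped
  docker run -d -p "$port:9000" --name "$target" my_app
  # Both colors now serve traffic; stop the old one.
  docker stop "$current"
}
```

Calling `deploy green` then brings up blue on 9990 and stops green, and vice versa on the next release.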

You might want to fiddle with upstream server entry parameters to make the switchover faster, or use haproxy in your stack and manually (or automatically via management socket) manage backends.
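For example, lowering `fail_timeout` on the upstream servers makes nginx retry a marked-down backend sooner than the 10-second default (the values here are illustrative):

```nginx
upstream containers {
  # After max_fails failed attempts, the server is considered down
  # for fail_timeout before nginx tries it again.
  server 127.0.0.1:9990 max_fails=1 fail_timeout=2s;  # blue
  server 127.0.0.1:9991 max_fails=1 fail_timeout=2s;  # green
}
```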

If things go wrong, just docker start the_previous_color and docker stop the_latest_color.

Since you use Ansible, you could use it to orchestrate this process, and even add smoke tests to the mix so a rollback is automatically triggered if something goes wrong.
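Such a smoke test can be as simple as polling the new container until it answers, then rolling back if it never does. A minimal sketch, assuming the health check is expressible as a shell command; the `/health` endpoint in the usage comment is a placeholder, not something the app is known to expose:

```shell
#!/bin/sh
# Poll a health-check command up to N times, one second apart.
# Returns 0 on the first success, 1 if the check never passes.

smoke_test() {
  check="$1"          # command to run as the health check
  attempts="${2:-10}" # max attempts (default 10)
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if sh -c "$check" >/dev/null 2>&1; then
      return 0        # healthy
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1            # never became healthy
}

# Example wiring into the blue/green deploy:
# if ! smoke_test "curl -fsS http://127.0.0.1:9990/health" 10; then
#   docker stop blue && docker start green   # automatic rollback
# fi
```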