I don't know what I'm doing wrong, but I simply cannot get docker-compose up
to use the latest image from our registry without first removing the old containers from the system completely. It looks like compose is using the previously started image even though docker-compose pull has fetched a newer image.
I looked at How to get docker-compose to always re-create containers from fresh images? which seemed to be similar to my issue, but none of the provided solutions there work for me, since I'm looking for a solution I can use on the production server and there I don't want to be removing all containers before starting them again (possible data loss?). I would like for compose only to detect the new version of the changed images, pull them and then restart the services with those new images.
I created a simple test project for this in which the only goal is to get a version number to increase on each new build. The version number is displayed if I browse to the nginx server that is created (this works as expected locally).
docker version: 1.11.2
docker-compose version: 1.7.1
OS: tested on both CentOS 7 and OS X 10.10 using docker-toolbox
My docker-compose.yml:
version: '2'
services:
  application:
    image: ourprivate.docker.reg:5000/ourcompany/buildchaintest:0.1.8-dev
    volumes:
      - /var/www/html
    tty: true

  nginx:
    build: nginx
    ports:
      - "80:80"
    volumes_from:
      - application
    volumes:
      - ./logs/nginx/:/var/log/nginx

  php:
    container_name: buildchaintest_php_1
    build: php-fpm
    expose:
      - "9000"
    volumes_from:
      - application
    volumes:
      - ./logs/php-fpm/:/var/www/logs
On our Jenkins server I run the following to build and tag the image:
cd $WORKSPACE && PROJECT_VERSION=$(cat VERSION)-dev
/usr/local/bin/docker-compose rm -f
/usr/local/bin/docker-compose build
docker tag ourprivate.docker.reg:5000/ourcompany/buildchaintest ourprivate.docker.reg:5000/ourcompany/buildchaintest:$PROJECT_VERSION
docker push ourprivate.docker.reg:5000/ourcompany/buildchaintest
This seems to be doing what it's supposed to, since I get a new version tag in our repository each time the build completes and the version number has been bumped.
If I now run
docker-compose pull && docker-compose -f docker-compose.yml up -d
in a folder on my computer that contains only the docker-compose.yml and the Dockerfiles needed to build the nginx and php services, the output I get is not the latest version number as tagged in the registry and shown in the docker-compose.yml (0.1.8), but the version before that, 0.1.7. However, the output of the pull command suggests that a new version of the image was fetched:
Pulling application (ourprivate.docker.reg:5000/ourcompany/buildchaintest:latest)...
latest: Pulling from ourcompany/buildchaintest
Digest: sha256:8f7a06203005ff932799fe89e7756cd21719cccb9099b7898af2399414bfe62a
Status: Downloaded newer image for ourprivate.docker.reg:5000/ourcompany/buildchaintest:0.1.8-dev
Only if I run
docker-compose stop && docker-compose rm -f
and then run the docker-compose up command again do I get the new version to show up on screen as expected.
Is this the intended behaviour of docker-compose? I.e. should I always do a docker-compose rm -f before running up again, even on production servers? Or am I doing something against the grain here, which is why it's not working?
The goal is to have our build process build and tag versioned images for the docker-compose.yml, push those to our private registry, and then have the "release to production" step simply copy the docker-compose.yml to the production server and run docker-compose pull && docker-compose -f docker-compose.yml up -d for the new image to start in production. If anyone has tips on this, or can point to a best-practices tutorial for this kind of setup, that would be much appreciated too.
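That release step could be sketched as a small script. This is an illustrative assumption, not a verified best practice: the registry host, image name and the VERSION-file tag scheme are taken from the question, and the fallback version is a placeholder.

```shell
#!/bin/sh
# Hypothetical "release to production" script; registry host, image name
# and the <VERSION>-dev tag scheme come from the question above.
set -e

REGISTRY="ourprivate.docker.reg:5000/ourcompany"
IMAGE="buildchaintest"
# Same tag scheme the Jenkins job uses; 0.1.8 is only a fallback here.
PROJECT_VERSION="$(cat VERSION 2>/dev/null || echo 0.1.8)-dev"
TAG="$REGISTRY/$IMAGE:$PROJECT_VERSION"
echo "releasing $TAG"

# Pull the tags referenced in docker-compose.yml, then let `up` recreate
# only the containers whose image or configuration changed.
if command -v docker-compose >/dev/null 2>&1; then
  docker-compose -f docker-compose.yml pull
  docker-compose -f docker-compose.yml up -d
fi
```

With an image pinned by version tag in the compose file, `up -d` compares the container's image ID against the freshly pulled one and recreates only what changed.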
The down option resolves this problem. I run my compose file:
docker-compose -f docker/docker-compose.yml up -d
then I delete everything with down --rmi all, which also removes the images, so the next up has to pull fresh ones:
docker-compose -f docker/docker-compose.yml down --rmi all
I've seen this occur in our production system of 7-8 docker containers. Another solution that worked for me in production was to remove the containers before running up: this seems to make up create new ones from the latest image. It doesn't yet fulfil my dream of a down+up per EACH changed container (serially, for less downtime), but it does force up to update the containers.
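The answer's actual command was lost above; a plausible shape of the cycle it describes, offered purely as a guess rather than a quote, would be:

```shell
#!/bin/sh
# Sketch of the remove-then-recreate cycle described in the answer; the
# original command was omitted, so treat this as an assumption.
set -e

PLAN="stop rm pull up"
echo "running: $PLAN"

if command -v docker-compose >/dev/null 2>&1; then
  docker-compose stop      # stop the running containers
  docker-compose rm -f     # remove them so `up` cannot reuse old ones
  docker-compose pull      # fetch the newest image tags
  docker-compose up -d     # recreate every service from fresh images
fi
```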
I have extended Abhi's script a bit further, as below
To get the latest images use docker-compose build --pull
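For the services built from local Dockerfiles (nginx and php-fpm in the question), that would look roughly like this; `--pull` forces docker to re-check each base image named in a `FROM` line before building:

```shell
# Rebuild the locally built services against freshly pulled base images,
# then restart whatever changed.
BUILD_CMD="docker-compose build --pull"
echo "$BUILD_CMD"
if command -v docker-compose >/dev/null 2>&1; then
  $BUILD_CMD
  docker-compose up -d
fi
```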
I use a single chained command which is really 3 in 1: it stops the services, pulls the latest images and then starts the services again.
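The answer's command itself is missing above; a chain matching its description (stop, pull, start), offered as an assumption:

```shell
# A guess at the omitted "3 in 1" command: stop, pull, then start again.
STEPS=3
echo "chained steps: $STEPS"
if command -v docker-compose >/dev/null 2>&1; then
  docker-compose stop && docker-compose pull && docker-compose up -d
fi
```

Unlike rm -f, this chain leaves the containers (and any volumes they hold) in place between stop and up.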
The docker-compose documentation for the 'up' command clearly states that it updates the container should the image be changed since the last 'up' was performed:
So by using 'stop' followed by 'pull' and then 'up' this should therefore avoid issues of lost volumes for the running containers, except of course, for containers whose images have been updated.
I am currently experimenting with this process and will include my results in this comment shortly.
To close this question: what seemed to work is indeed removing the containers before running up again. What one needs to keep in mind when doing it like this is that data volume containers are removed as well if you just run rm -f. In order to prevent that, I specify explicitly each container to remove. As I said in my question, I don't know if this is the correct process, but it seems to work for our use case, so until we find a better solution we'll roll with this one.
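With the service names from the question's docker-compose.yml, that explicit removal might look like the sketch below. The exact list the author used was not shown, so this is an assumption; the point is that the application data-volume container is deliberately left out of the rm list so its volumes survive:

```shell
#!/bin/sh
# Remove only the stateless services by name; "application" (the data
# volume container) is intentionally absent from the removal list.
set -e

KEEP="application"
REMOVE="nginx php"
echo "keeping $KEEP, removing $REMOVE"

if command -v docker-compose >/dev/null 2>&1; then
  docker-compose stop $REMOVE
  docker-compose rm -f $REMOVE
  docker-compose pull
  docker-compose up -d
fi
```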