I am playing with Docker and plan to use it in a GitLab CI environment to package the current project state into containers and provide running instances for reviews.
I use a very simple Dockerfile as follows:
FROM php:7.0-apache
RUN sed -i 's!/var/www/html!/var/www/html/public!g' /etc/apache2/sites-available/000-default.conf
COPY . /var/www/html/
Now, as soon as I add a new (empty) file (touch foobar) to the current directory and call
docker build -t test2 --rm .
again, a full new layer is created, containing all of the code.
If I do not create a new file, the old image seems to be nicely reused.
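A quick way to see this is to compare the layer sizes with docker history after rebuilding. A minimal sketch, assuming the project is in the current directory and the image is tagged test2:
docker build -t test2 --rm .
touch foobar
docker build -t test2 --rm .
docker history test2
The last command lists every layer of the image with its size; after the second build, the COPY layer shows up again with the full size of the code.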
I have a half-way solution using the following Dockerfile:
FROM test2:latest
RUN sed -i 's!/var/www/html!/var/www/html/public!g' /etc/apache2/sites-available/000-default.conf
COPY . /var/www/html/
After digging into that issue and switching the storage driver to overlay, this seems to be what I want: only a few bytes are added as a new layer.
But now I am wondering how I could integrate this into my CI setup: basically, I would need two different Dockerfiles, depending on whether the image already exists or not.
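A minimal sketch of such a CI step, assuming two hypothetical files Dockerfile.full (the php:7.0-apache variant above) and Dockerfile.incremental (the FROM test2:latest variant), could look like this:
if docker image inspect test2:latest > /dev/null 2>&1; then
  docker build -f Dockerfile.incremental -t test2 --rm .
else
  docker build -f Dockerfile.full -t test2 --rm .
fi
docker image inspect exits non-zero if the tag does not exist locally, so the script falls back to the full build in that case.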
Is there a better solution for this?
Build your images with the same tags, or with no tags at all.
If you reuse the same tags, the old images will be untagged and show <none> as their name. If you don't tag them, they will also show <none>.
Now you can schedule a cleanup command; this will remove all dangling images, stopped containers, etc.
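For example (assuming the standard docker system prune command, which removes stopped containers, dangling images, and unused networks; -f skips the confirmation prompt), a nightly cron entry could look like this:
0 3 * * * docker system prune -f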
One suggestion is to use the command
docker image prune
to clean dangling images. This can save you a lot of space. You can run this command regularly in your CI.
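For example, as a cleanup step at the end of a CI job (-f skips the interactive confirmation; docker system df is only used here to show the disk usage before and after):
docker system df
docker image prune -f
docker system df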