I made a Docker container which is fairly large. When I commit the container to create an image, the image is about 7.8 GB. But when I export the container (not save the image!) to a tarball and re-import it, the resulting image is only about 3 GB. Of course the history is lost, but that is OK for me, since the image is "done" in my opinion and ready for deployment.
How can I flatten an image/container without exporting it to the disk and importing it again? And: Is it a wise idea to do that or am I missing some important point?
Now that Docker has released multi-stage builds in 17.05, you can reformulate your build to look like this:
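A minimal sketch of such a multi-stage Dockerfile — the image names, paths, and commands are placeholders, not part of the original answer:

```Dockerfile
# Build stage: all the heavy tooling and intermediate layers live here
FROM build-image AS build
WORKDIR /build
# ... your existing build steps ...

# Final stage: start from a clean base and copy in only the results
FROM runtime-image
COPY --from=build /build/output /app
CMD ["/app/run"]
```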
The result is that your build-environment layers are cached on the build server, but only a flattened copy exists in the resulting image that you tag and push.
Note that you would typically reformulate this so that a complex build environment copies over only a few directories. Here's an example with Go that produces a single-binary image from source code, with a single build command, without installing Go on the host or compiling outside of Docker:
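One way that Dockerfile could look — the `app.go` filename and Go version are assumptions for this sketch:

```Dockerfile
# Build stage: use the official Go image to compile the binary
FROM golang:1.22 AS build
WORKDIR /src
COPY app.go .
# CGO_ENABLED=0 yields a statically linked binary that can run on scratch
RUN CGO_ENABLED=0 go build -o /bin/hello app.go

# Final stage: scratch contains nothing except the single binary
FROM scratch
COPY --from=build /bin/hello /hello
ENTRYPOINT ["/hello"]
```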
The Go file is a simple hello world:
The build creates both environments, the build environment and the scratch one, and then tags the scratch one:
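A single build command suffices; the `test/hello-world` tag is just an example name:

```sh
docker build -t test/hello-world .
```

Docker builds every stage in order, but only the final stage receives the tag.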
Looking at the images, only the single binary is in the image being shipped, while the build environment is over 700MB:
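For comparison, list the images — the output below is illustrative only; your IDs and exact sizes will differ:

```sh
docker images
# REPOSITORY         TAG      SIZE
# test/hello-world   latest   ~2 MB
# golang             1.22     700+ MB
```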
And yes, it runs:
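Running the tagged image (example tag from the sketch above):

```sh
docker run --rm test/hello-world
```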
As of Docker 1.13, you can use the `--squash` flag.

Before version 1.13:
To my knowledge, you cannot do this using the Docker API. `docker export` and `docker import` are designed for this scenario, as you yourself already mention. If you don't want to save to disk, you could probably pipe the output stream of export into the input stream of import. I have not tested this, but try
Build the image with the `--squash` flag: https://docs.docker.com/engine/reference/commandline/build/#squash-an-images-layers-squash-experimental-only
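For example (the tag is a placeholder; note that `--squash` requires the daemon to run in experimental mode):

```sh
docker build --squash -t my-image .
```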
Also consider mopping up unneeded files, such as the apt cache:

```Dockerfile
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
```