I need to create a Docker image (and consequently containers from that image) that use large files (containing genomic data, thus reaching ~10GB in size).
How am I supposed to optimize their usage? Am I supposed to include them in the image (e.g. COPY large_folder large_folder_in_container)? Is there a better way of referencing such files? The point is that it seems strange to me to push such a container (which would be >10 GB) to my private registry. I wonder if there is a way of attaching a sort of volume to the container, without packing all those GBs together.
Thank you.
Am I supposed to include them in the container (such as COPY large_folder large_folder_in_container)?
If you do so, that would include them in the image, not the container: you could launch 20 containers from that image, and the actual disk space used would still be 10 GB, because all of those containers share the image's read-only layers.
If you were to build another image from your first image, the layered filesystem would reuse the layers from the parent image, and the new image would still be "only" 10 GB.
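You can see the layer reuse for yourself with docker history. A sketch, assuming a hypothetical parent image named my-genomics-image that already contains the 10 GB COPY layer (all names here are placeholders):

```shell
# Build a child image on top of the large parent image.
cat > Dockerfile.child <<'EOF'
# Parent image already holds the 10 GB data layer
FROM my-genomics-image
# Only this new layer adds disk usage
RUN apt-get update && apt-get install -y samtools
EOF
docker build -f Dockerfile.child -t my-genomics-image:tools .

# The large COPY layer is listed once and shared with the parent,
# not duplicated on disk.
docker history my-genomics-image:tools
```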
Is there a better way of referencing such files?
If you already have some way to distribute the data, I would use a "bind mount" to attach the host directory to the containers:
docker run -v /path/to/data/on/host:/path/to/data/in/container <image> ...
That way you can change the image without having to re-download the large data set each time.
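Since genomic reference data is usually read-only, you can also add the ":ro" suffix so containers can read but not accidentally modify the host copy. A sketch, where the host path and image name are placeholders:

```shell
# Hypothetical host path and image name -- substitute your own.
docker run --rm \
  -v /data/genomes:/dataset:ro \
  my-genomics-image \
  ls /dataset
# ":ro" makes the bind mount read-only inside the container.
```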
If you want to use the registry to distribute the large data set, but want to manage changes to it separately, you could use a data volume container built from a Dockerfile like this:
FROM tianon/true
COPY dataset /dataset
VOLUME /dataset
From your application container you can attach that volume using:
docker run -d --name dataset <data volume image name>
docker run --volumes-from dataset <image> ...
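To actually distribute the data set through your registry, the data image is built and pushed like any other. A sketch with placeholder registry and image names:

```shell
# Placeholder names -- substitute your own registry and tags.
docker build -t myregistry.example.com/genome-dataset:v1 .
docker push myregistry.example.com/genome-dataset:v1

# On another host: pull the data image and wire it up.
docker pull myregistry.example.com/genome-dataset:v1
docker run -d --name dataset myregistry.example.com/genome-dataset:v1
docker run --volumes-from dataset my-genomics-image ...
```

When the data changes, you push a new tag (e.g. :v2) and only the changed layers are transferred.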
Either way, I think the Docker volumes documentation (https://docs.docker.com/engine/tutorials/dockervolumes/) is what you want.