I'm totally new to Docker so I appreciate your patience.
I'm looking for a way to deploy multiple containers from the same image; however, I need to pass a different config (file) to each one.
Right now, my understanding is that once you build an image, that's what gets deployed. The problem for me is that I don't see the point in building multiple images of the same application when only the config differs between the containers.
If this is the norm then I'll have to deal with it, but if there's another way, please put me out of my misery! :)
Thanks!
Just run the same image as many times as needed. New containers will be created, and they can then be started and stopped, each one saving its own configuration. For your convenience, it is better to give each of your containers a name with "--name".
For instance:
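(assuming your image is called "myimage"; the image and container names here are just placeholders)

docker run --name myapp1 -d myimage
docker run --name myapp2 -d myimage
docker stop myapp1
docker start myapp1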
That's it.
After that, your containers exist until you remove them, and you can start and stop them like VMs.
Each container runs from the same read-only (RO) image but with a read-write (RW) container-specific filesystem layer. The result is that each container can have its own files that are distinct from those of every other container.
You can pass in configuration on the CLI, as an environment variable, or as a unique volume mount. It's a very standard use case for Docker.
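For example (the image name, variable, and paths here are hypothetical):

docker run --name app1 -e APP_MODE=staging myimage
docker run --name app2 -v /srv/app2/config.yml:/etc/myapp/config.yml myimage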
I think looking at easy-to-understand examples will give you the best picture.
What you want to do is perfectly valid: an image should contain everything you need to run the application, just without the configuration.
To provide the configuration, you have several options:
a) volume mounts
Use volumes and mount the file during container start:

docker run -v $(pwd)/my.ini:/etc/mysql/my.ini percona

(the host path in -v must be absolute, hence $(pwd))
(and similarly with docker-compose; see the snippet below). Be aware that you can repeat this as often as you like, so you can mount several configs into your container (that is, into the runtime version of the image). You will create those configs on the host before running the container, and you need to ship those files along with the container, which is the downside of this approach (portability).
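A rough docker-compose equivalent of the volume mount above (the service name "db" is just a placeholder; compose resolves ./my.ini relative to the compose file):

version: "2"
services:
  db:
    image: percona
    volumes:
      - ./my.ini:/etc/mysql/my.ini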
b) entry-point based configuration (generation)
Most of the advanced Docker images provide a complex so-called entrypoint script, which consumes ENV variables you pass when starting the image to create the configuration(s) for you, e.g. https://github.com/docker-library/percona/blob/master/5.7/docker-entrypoint.sh
so when you run this image, you can do
docker run -e MYSQL_DATABASE=myapp percona
and this will start Percona and create the database myapp for you. This is all done by the entrypoint script. Of course, you can do whatever you like with this. E.g. this configures a general Portus image: https://github.com/EugenMayer/docker-rancher-extra-catalogs/blob/master/templates/registry-slim/11/docker-compose.yml which has this entrypoint: https://github.com/EugenMayer/docker-image-portus/blob/master/build/startup.sh
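A stripped-down sketch of such an entrypoint (the config path and the APP_DB_HOST variable are invented for illustration):

#!/bin/sh
set -e
# default value if the variable was not passed with -e
: "${APP_DB_HOST:=localhost}"
# render a config file from the environment at container start
mkdir -p /etc/myapp
cat > /etc/myapp/app.conf <<EOF
db_host = ${APP_DB_HOST}
EOF
# hand over to the actual application process (the image's CMD)
exec "$@"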
So you see, the entry-point strategy is very common and very powerful, and I would propose going this route whenever you can.
c) Derived images
Maybe for "completeness", the image-derive strategy, so you have you base image called "myapp" and for the installation X you create a new image
This gives you the image myapp:x. The obvious issue with this is that you end up having a lot of images; on the other hand, compared to a), it's much more portable.
Hope that helps