My current setup for running a docker container is along the lines of this:
- I've got a `main.env` file:

  ```sh
  # Main
  export PRIVATE_IP=`echo localhost`
  export MONGODB_HOST="$PRIVATE_IP"
  export MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
  ```
In my service file (upstart), I source this file:

```sh
. /path/to/main.env
```
I then call `docker run` with multiple `-e` flags, one for each of the environment variables I want inside of the container. In this case I would call something like:

```sh
docker run -e MONGODB_URL=$MONGODB_URL ubuntu bash
```
I would then expect `MONGODB_URL` inside of the container to equal `mongodb://localhost:27017/development`. Note that in reality `echo localhost` is replaced by a `curl` to Amazon's API for the actual `PRIVATE_IP`.
This becomes a bit unwieldy as the number of environment variables you need to give your container grows. There is a subtle point here: the environment variables need to be resolved at run time, for example with a call to `curl` or by referring to other environment variables.
The solution I was hoping to use is:
- calling `docker run` with an `--env-file` parameter pointing at a file like this:

  ```sh
  # Main
  PRIVATE_IP=`echo localhost`
  MONGODB_HOST="$PRIVATE_IP"
  MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
  ```
- Then my `docker run` command would be significantly shortened to:

  ```sh
  docker run --env-file=/path/to/main.env ubuntu bash
  ```

  (Keep in mind I've usually got around 12-15 environment variables.)
This is where I hit my problem: inside the container none of the variables resolve as expected. Instead I end up with:

```sh
PRIVATE_IP=`echo localhost`
MONGODB_HOST="$PRIVATE_IP"
MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
```
I could circumvent this by doing the following:

- Sourcing the `main.env` file.
- Creating a second file containing just the names of the variables I want (meaning docker would search for them in the environment).
- Then calling `docker run` with this second file as the argument to `--env-file`.

This would work, but it would mean I need to maintain two files instead of one, and really wouldn't be that big of an improvement over the current situation.
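For reference, a rough sketch of that two-file workaround (the file names and the surrounding service script are illustrative, not my actual setup):

```sh
# In the upstart/service script: resolve the values locally first...
. /path/to/main.env

# ...then pass a second file, names.env, that contains only the names:
#
#   PRIVATE_IP
#   MONGODB_HOST
#   MONGODB_URL
#
# docker looks each bare name up in the environment of the calling shell.
docker run --env-file=/path/to/names.env ubuntu bash
```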
What I would prefer is to have the variables resolve as expected.
The closest question to mine that I could find is: 12factor config approach with Docker
I had this issue when using `docker run` in a separate run script `run.sh`, since I wanted the credentials `ADMIN_USER` and `ADMIN_PASSWORD` to be accessible in the container, but not show up in the command. Following the other answers and passing a separate environment file with `--env` or `--env-file` didn't work for my image (though it worked for the Bash image). What worked was creating a separate env file and sourcing it in the run script when launching the container.
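The answer's original files aren't preserved above, so here is a minimal sketch of the idea; the file names (`env.sh`, `run.sh`), the image name, and the credential values are placeholders:

```sh
# env.sh -- kept next to run.sh (and out of version control); holds the secrets
export ADMIN_USER='admin'
export ADMIN_PASSWORD='changeme'
```

```sh
#!/bin/sh
# run.sh -- source the credentials, then pass them to the container explicitly,
# so they never appear in the command you type.
. ./env.sh
docker run \
  -e ADMIN_USER="$ADMIN_USER" \
  -e ADMIN_PASSWORD="$ADMIN_PASSWORD" \
  my-image
```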
What you can do is create a startup script that runs when the container starts. So if your current Dockerfile looks something like this:
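(The answer's original Dockerfile isn't preserved here; this is an illustrative stand-in, with the base image and command assumed:)

```dockerfile
FROM ubuntu
# ... your existing setup ...
CMD ["node", "server.js"]
```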
Change it to:
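(Again a sketch rather than the answer's exact file; `start.sh` wraps whatever command the container was going to run:)

```dockerfile
FROM ubuntu
# ... your existing setup ...
COPY start.sh main.env /
RUN chmod +x /start.sh
ENTRYPOINT ["/start.sh"]
CMD ["node", "server.js"]
```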
In your start.sh script do the following:
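(A minimal version, assuming `main.env` was copied or mounted to `/main.env` as above:)

```sh
#!/bin/sh
# Source the env file inside the container, so backticks and nested
# variable references are resolved at start time.
. /main.env
# Hand off to the command supplied via CMD (or on the docker run command line).
exec "$@"
```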
I had a very similar problem to this. If I passed the contents of the env file to docker as separate `-e` directives then everything ran fine; however, if I passed the file using `--env-file` the container failed to run properly.

It turns out there were some spurious line endings in the file (I had copied it from Windows and was running Docker on Ubuntu). When I removed them, the container ran the same with `--env` or `--env-file`.
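The answer doesn't name a specific tool, but one way to strip the Windows-style carriage returns is:

```sh
# Remove the trailing \r that Windows line endings leave on each line
sed -i 's/\r$//' /path/to/main.env
```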
Both `--env` and `--env-file` set up variables as-is and do not resolve nested variables.

Solomon Hykes talks about configuring containers at run time and the various approaches. The one that should work for you is to volume-mount `main.env` from the host into the container and source it.
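A rough sketch of that approach (the mount path and the trailing command are placeholders):

```sh
# Mount the env file into the container and source it before the real command.
docker run \
  -v /path/to/main.env:/main.env:ro \
  ubuntu \
  bash -c '. /main.env && exec bash'
```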
Creating an env file that is nothing more than key/value pairs means it can be processed with normal shell commands and appended to the environment. Look at bash's `-a` (allexport) option.
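A sketch of what that looks like, assuming `main.env` contains plain `KEY=value` lines:

```sh
# With allexport (set -a) on, every variable assigned while sourcing is exported.
set -a
. /path/to/main.env
set +a

# The resolved values are now in this shell's environment, so docker can pick
# them up by name alone:
docker run -e MONGODB_URL ubuntu bash
```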