I'm creating an image that has a problem similar to the following Docker project:

Dockerfile:

```
FROM alpine:3.9.3
COPY ./env.sh /env.sh
RUN source /env.sh
CMD env
```

env.sh:

```
TEST=test123
```
I built the image with

```
docker build -t sandbox .
```

and ran it with

```
docker run --rm sandbox
```

The output is

```
HOSTNAME=72405c43801b
SHLVL=1
HOME=/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
```
My environment variable is missing.

In the real project I have to source a longer, complex script for the IBM DB2 client installation, which also sets environment variables. How can I achieve this without tracing the whole installation process and setting every variable with ENV in the Dockerfile?
EDIT:

In the real project the file env.sh is created as part of the installation process and is not available from outside the container. The environment variables are set depending on the system the script is executed on; if I ran it on the host, it would set the wrong variables for the guest.
Part of the real script is

```
if [ -f ${INST_DIR?}/tools/clpplus.jar ]; then
    AddRemoveString CLASSPATH ${INST_DIR?}/tools/clpplus.jar a
fi
if [ -f ${INST_DIR?}/tools/antlr-3.2.jar ]; then
    AddRemoveString CLASSPATH ${INST_DIR?}/tools/antlr-3.2.jar a
fi
if [ -f ${INST_DIR?}/tools/jline-0.9.93.jar ]; then
    AddRemoveString CLASSPATH ${INST_DIR?}/tools/jline-0.9.93.jar a
fi
if [ -f ${INST_DIR?}/java/db2jcc.jar ]; then
    AddRemoveString CLASSPATH ${INST_DIR?}/java/db2jcc.jar a
fi
if [ -f ${INST_DIR?}/java/db2jcc_license_cisuz.jar ]; then
    AddRemoveString CLASSPATH ${INST_DIR?}/java/db2jcc_license_cisuz.jar a
fi
```
It checks the installation and sets the variables accordingly. Since there is no DB2 installation on the host, the variables wouldn't be set there.
Each Dockerfile RUN step runs in a new container with a new shell. If you try to set an environment variable in one RUN step, it will not be visible in later steps; the Dockerfile in the question demonstrates exactly this. There are three good solutions, in order from easiest/best to hardest/most complex:
1. Avoid needing the environment variables at all. Install software into “system” locations like /usr; it will be isolated inside the Docker image anyway. (Don’t use an additional isolation tool like a Python virtual environment, or a version manager like nvm or rvm; just install the specific thing you need.)

2. Use ENV. A variable declared with ENV is visible in every subsequent build step and in the running container, so this will work.

3. Use an entrypoint script.
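Applied to the minimal example in the question, the ENV option might look like this sketch. Note that ENV values are baked in at build time, so this form only helps when the values don't depend on state computed inside the running container:

```
FROM alpine:3.9.3
# ENV persists across later build steps and into the running container,
# unlike a variable assigned inside a single RUN step.
ENV TEST=test123
CMD env
```

Running `docker run --rm sandbox` would then list `TEST=test123` alongside the default variables.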
For the entrypoint option: COPY the script into your image, make it the ENTRYPOINT, and make the CMD the thing you’re actually running. If you care about such things, environment variables you set via this approach won’t be visible in docker inspect or in a docker exec debug shell; but if you docker run -it ... sh
they will be visible. This is a useful and important enough pattern that I almost always use CMD in my Dockerfiles unless I’m specifically trying to do first-time setup like this.

I ended up doing a multi-step build of the Dockerfile in a bash script.
Step 1. Set up your Dockerfile to include everything up to the point where you need to source a file for environment variables.

Step 2. In the Dockerfile, source the environment variables and echo the sorted environment to a file:

```
RUN source $(pwd)/buildstepenv_rhel72_64.sh && source /opt/rh/devtoolset-8/enable && env | sort -u > /tmp.env
```

Step 3. Build the image with a tag:

```
docker build -t ${image}_dev .
```

Step 4. Run the image using the tag, read the captured environment, and append each line, rewritten as an ENV instruction, to the next Dockerfile:

```
docker run --rm ${image}_dev cat /tmp.env | sed 's/$/"/;s/=/="/;s/^/ENV /' >> logs/docker/Dockerfile.${step}
```

Step 5. Construct the remainder of your Dockerfile.
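The sed pipeline in Step 4 turns each `KEY=value` line of the captured environment into an `ENV KEY="value"` instruction: it appends a closing quote, replaces the first `=` with `="`, and prefixes `ENV `. You can check it without Docker by piping in sample lines (the paths below are made up for illustration):

```shell
# Stand-in for `docker run --rm ${image}_dev cat /tmp.env`:
printf 'INST_DIR=/opt/ibm/db2\nCLASSPATH=/opt/ibm/db2/java/db2jcc.jar\n' |
  sed 's/$/"/;s/=/="/;s/^/ENV /'
# ENV INST_DIR="/opt/ibm/db2"
# ENV CLASSPATH="/opt/ibm/db2/java/db2jcc.jar"
```

One caveat: values that themselves contain `"` or span multiple lines would need extra escaping before this rewrite is safe.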
That should do the trick. Hope it helps!
I found an alternative option that I like better: configure an ENTRYPOINT step in the Dockerfile that sources the file and then runs the CMD it receives as arguments.
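A sketch of that pattern, with hypothetical names (the `entrypoint.sh` filename and the `/env.sh` path are assumptions): the file is sourced in the same shell that then execs the command, so the variables are visible to whatever CMD supplies. The question's minimal env.sh assigns TEST without exporting it, hence the `set -a`:

```shell
#!/bin/sh
# entrypoint.sh (hypothetical name)
set -a          # auto-export every variable the sourced file assigns
. /env.sh       # run the installer-generated script in *this* shell
set +a
exec "$@"       # replace the shell with the CMD so signals reach it directly
```

The Dockerfile then carries `COPY entrypoint.sh /entrypoint.sh`, `ENTRYPOINT ["/entrypoint.sh"]`, and `CMD ["env"]`, so `docker run --rm sandbox` prints the sourced variables.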