How to execute a shell command before the ENTRYPOINT

Published 2020-05-16 04:26

Question:

I have the following Dockerfile for my Node.js project:

FROM node:boron

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install



# Bundle app source
COPY . /usr/src/app

# Replace with env variable
RUN envsubs < fil1 > file2

EXPOSE 8080
CMD [ "npm", "start" ]

I run the docker container with the -e flag providing the environment variable

But I do not see the replacement. Will the RUN command be executed when the env variable is available?

Answer 1:

Images are immutable

Dockerfile defines the build process for an image. Once built, the image is immutable (cannot be changed). Runtime variables are not something that would be baked into this immutable image. So Dockerfile is the wrong place to address this.

Using an entrypoint script

What you probably want to do is override the default ENTRYPOINT with your own script, and have that script do something with environment variables. Since the entrypoint script executes at runtime (when the container starts), this is the correct time to gather environment variables and do something with them.

First, you need to adjust your Dockerfile to know about an entrypoint script. While Dockerfile is not directly involved in handling the environment variable, it still needs to know about this script, because the script will be baked into your image.

Dockerfile:

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]
CMD ["npm", "start"]

Now, write an entrypoint script which does whatever setup is needed before the command is run, and at the end, exec the command itself.

entrypoint.sh:

#!/bin/sh

# Where $ENVSUBS is whatever command you are looking to run
$ENVSUBS < fil1 > file2

npm install

# This will exec the CMD from your Dockerfile, i.e. "npm start"
exec "$@"

Here, I have included npm install, since you asked about this in the comments. I will note that this will run npm install on every run. If that's appropriate, fine, but I wanted to point out it will run every time, which will add some latency to your startup time.
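
If that startup latency matters, one possible tweak (my own assumption, not something from the original answer) is to skip the install when node_modules already exists:

#!/bin/sh

# Where $ENVSUBS is whatever command you are looking to run
$ENVSUBS < fil1 > file2

# Only run the install when dependencies are not already present
if [ ! -d node_modules ]; then
  npm install
fi

# This will exec the CMD from your Dockerfile, i.e. "npm start"
exec "$@"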

Now rebuild your image, so the entrypoint script is a part of it.
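
For example, something along these lines (the tag my-node-app is just a placeholder):

docker build -t my-node-app .

The build copies entrypoint.sh into the image via the COPY instruction, so the script must be present in the build context next to the Dockerfile.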

Using environment variables at runtime

The entrypoint script knows how to use the environment variable, but you still have to tell Docker to import the variable at runtime. You can use the -e flag to docker run to do so.

docker run -e "ENVSUBS=$ENVSUBS" <image_name>

Here, Docker is told to define an environment variable ENVSUBS, and the value it is assigned is the value of $ENVSUBS from the current shell environment.
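
If you would rather not depend on $ENVSUBS being exported in your current shell, you can also pass a literal value, or collect variables in a file and load them with --env-file. A sketch (the value, image name, and file name are placeholders):

docker run -e "ENVSUBS=envsubst" <image_name>
docker run --env-file ./app.env <image_name>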

How entrypoint scripts work

I'll elaborate a bit on this, because in the comments, it seemed you were a little foggy on how this fits together.

When Docker starts a container, it executes one (and only one) command inside the container. This command becomes PID 1, just like init or systemd on a typical Linux system. This process is responsible for running any other processes the container needs to have.

By default, the ENTRYPOINT is /bin/sh -c. You can override it in Dockerfile, or docker-compose.yml, or using the docker command.

When a container is started, Docker runs the entrypoint command, and passes the command (CMD) to it as an argument list. Earlier, we defined our own ENTRYPOINT as /entrypoint.sh. That means that in your case, this is what Docker will execute in the container when it starts:

/entrypoint.sh npm start

Because ["npm", "start"] was defined as the command, that is what gets passed as an argument list to the entrypoint script.

Because we defined an environment variable using the -e flag, this entrypoint script (and its children) will have access to that environment variable.

At the end of the entrypoint script, we run exec "$@". Because $@ expands to the argument list passed to the script, this will run

exec npm start

And because exec replaces the current process with the command it runs, when all is said and done, npm start ends up as PID 1 in your container.
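
You can check this from the host with docker top, which lists the processes inside a running container; with this setup npm should show up directly rather than being wrapped in an extra shell (the container name is a placeholder):

docker top <container_name>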

Why you can't use multiple CMDs

In the comments, you asked whether you can define multiple CMD entries to run multiple things.

You can only have one ENTRYPOINT and one CMD defined. These are not used at all during the build process. Unlike RUN and COPY, they are not executed during the build. They are added as metadata items to the image once it is built.

It is only later, when the image is run as a container, that these metadata fields are read, and used to start the container.

As mentioned earlier, the entrypoint is what is really run, and it is passed the CMD as an argument list. The reason they are separate is partly historical. In early versions of Docker, CMD was the only available option, and ENTRYPOINT was fixed as being /bin/sh -c. But due to situations like this one, Docker eventually allowed ENTRYPOINT to be defined by the user.
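
You can look at these metadata fields on a built image with docker inspect (the image name is a placeholder). For the Dockerfile above it should report something like [/entrypoint.sh] for the entrypoint and [npm start] for the command:

docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' <image_name>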



Answer 2:

Will the RUN command be executed when the env variable is available?

Environment variables set with the -e flag are set when you run the container.

The problem is that the Dockerfile is read at image build time, so the RUN command will not be aware of those environment variables.

The way to have environment variables available at build time is to add an ENV line to your Dockerfile. (https://docs.docker.com/engine/reference/builder/#/environment-replacement)

So your Dockerfile might look like this:

FROM node:latest

WORKDIR /src
ADD package.json .

ENV A YOLO

RUN echo "$A"

And the output:

$ docker build .
Sending build context to Docker daemon  2.56 kB
Step 1 : FROM node:latest
 ---> f5eca816b45d
Step 2 : WORKDIR /src
 ---> Using cache
 ---> 4ede3b23756d
Step 3 : ADD package.json .
 ---> Using cache
 ---> a4671a30bfe4
Step 4 : ENV A YOLO
 ---> Running in 7c325474af3c
 ---> eeefe2c8bc47
Removing intermediate container 7c325474af3c
Step 5 : RUN echo "$A"
 ---> Running in 35e0d85d8ce2
YOLO
 ---> 78d5df7d2322

You can see on the next-to-last line that when the RUN command was launched, the build container was aware that the environment variable was set.
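
Note that a value passed with -e at run time takes precedence over an ENV baked in at build time, so you can still override A when you start a container from this image. A quick sketch (the image id is a placeholder):

docker run -e A=OVERRIDE <image_id> sh -c 'echo "$A"'

This should print OVERRIDE instead of YOLO.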



Answer 3:

For images with bash as the default entrypoint, this is what I do to allow myself to run some scripts before the shell starts, if needed:

FROM ubuntu
COPY init.sh /root/init.sh
RUN echo 'a=(${BEFORE_SHELL//:/ }); for c in ${a[@]}; do source $c; done' >> ~/.bashrc

If you want to source a script at container login, pass its path in the environment variable BEFORE_SHELL. Example using docker-compose:

version: '3'
services:
  shell:
    build:
      context: .
    environment:
      BEFORE_SHELL: '/root/init.sh'

Some remarks:

  • If BEFORE_SHELL is not set, nothing happens (you get the default behavior)
  • You can pass any script path available in the container, including mounted ones
  • The scripts are sourced so variables defined in the scripts will be available in the container
  • Multiple scripts can be passed (use a : to separate the paths)
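
For completeness, a minimal init.sh for this setup might look like the sketch below (purely illustrative; the variable and alias are made up):

#!/bin/bash
# init.sh - sourced via BEFORE_SHELL when the interactive shell starts
export PROJECT_ROOT=/root/project   # hypothetical variable, visible in the shell afterwards
alias ll='ls -la'                   # hypothetical convenience alias

With the docker-compose file above, docker-compose run shell should drop you into a bash session where PROJECT_ROOT and ll are already defined.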