Adding files to standard images using docker-compose

Published 2019-03-15 02:06

I'm unsure if something obvious escapes me or if it's just not possible, but I'm trying to compose an entire application stack with images from Docker Hub.

One of them is mysql, and it supports adding custom configuration files through volumes and running .sql files from a mounted directory.

But I have these files on the machine where I'm running docker-compose, not on the host. Is there no way to specify files from the local machine to copy into the container before it runs its entrypoint/cmd? Do I really have to create local images of everything just for this case?

4 Answers
smile是对你的礼貌
#2 · 2019-03-15 02:31

This is how I'm doing it with volumes:

services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - ./shell_scripts:/shell_scripts 
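For the asker's mysql case specifically, the official mysql image executes any .sql files it finds in /docker-entrypoint-initdb.d when the database is first initialized, so a sketch along these lines should cover the init-script part (the ./sql directory name is an assumption):

services:
  my-db-app:
    image: mysql:latest
    volumes:
      # .sql files in this directory run once, on first initialization
      - ./sql:/docker-entrypoint-initdb.d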
霸刀☆藐视天下
#3 · 2019-03-15 02:48

Option A: Include the files inside your image. This is less than ideal since you are mixing configuration files with your image (which should really contain only your binaries, not your config), but it satisfies the requirement to use only docker-compose to send the files.

This option is achieved by using docker-compose to build your image, and that build will send over any files from the build directory to the remote docker engine. Your docker-compose.yml would look like:

version: '2'

services:
  my-db-app:
    build: db/.
    image: custom-db

And db/Dockerfile would look like:

FROM mysql:latest
COPY ./sql /sql

The entrypoint/cmd would remain unchanged. You would need to run docker-compose up --build if the image already exists and you need to change the sql files.
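If the intent is for the stock mysql entrypoint to pick those files up automatically, a small variant of the above Dockerfile could copy them into the image's init directory instead (a sketch, assuming the official mysql image):

FROM mysql:latest
# The official mysql entrypoint runs *.sql files from this directory
# the first time the database is initialized
COPY ./sql /docker-entrypoint-initdb.d/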


Option B: Use a volume to store your data. This cannot be done directly inside of docker-compose. However, it's the preferred way to include files from outside of the image in the container. You can populate the volume across the network by using the docker CLI and input redirection, along with a command like tar to pack and unpack the files sent over stdin:

tar -cC sql . | docker run --rm -i -v sql-files:/sql \
  busybox /bin/sh -c "tar -xC /sql"

Run that via a script and then have that same script bounce the db container to reload that config.
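A minimal sketch of such a script, reusing the service name my-db-app and volume name sql-files from the examples here (both assumptions):

#!/bin/sh
set -e
# pack the local ./sql directory and unpack it into the named volume
# on the (possibly remote) docker engine; -i keeps stdin open for the pipe
tar -cC sql . | docker run --rm -i -v sql-files:/sql \
  busybox /bin/sh -c "tar -xC /sql"
# bounce the db container so it sees the refreshed files
docker-compose restart my-db-app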


Option C: Use some kind of network attached filesystem. If you can configure NFS on the host where you are running your docker CLI, you can connect to those NFS shares from the remote docker node using one of the below options:

# create a reusable volume
$ docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=192.168.1.1,rw \
    --opt device=:/path/to/dir \
    foo

# or from the docker run command
$ docker run -it --rm \
  --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
  foo

# or to create a service
$ docker service create \
  --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
  foo
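Since the question is about docker-compose, the same NFS-backed volume can also be declared in the compose file itself through driver_opts; a sketch using the placeholder address and paths from above (the /sql mount target is an assumption):

version: '2'

services:
  my-db-app:
    image: mysql:latest
    volumes:
      - sql-nfs:/sql

volumes:
  sql-nfs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.1,rw
      device: ":/path/to/dir"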

Option D: With swarm mode, you can include files as configs in your containers. This allows configuration files, which would normally need to be pushed to every node in the swarm, to be sent on demand to the node where your service is running. This uses a docker-compose.yml file to define it, but swarm mode doesn't use docker-compose itself, so this may not fit your specific requirements. You can run a single-node swarm mode cluster, so this option is available even if you only have one node. It does require that each of your sql files is added as a separate config. The docker-compose.yml would look like:

version: '3.4'

configs:
  sql_file_1:
    file: ./file_1.sql

services:
  my-db-app:
    image: my-db-app:latest
    configs:
      - source: sql_file_1
        target: /sql/file_1.sql
        mode: 0444

Then instead of a docker-compose up, you'd run a docker stack deploy -c docker-compose.yml my-db-stack.
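For completeness, the full flow on a single node would be (stack name my-db-stack as above):

# one-time: turn this engine into a single-node swarm
docker swarm init
# deploy, or redeploy after changing the sql files
docker stack deploy -c docker-compose.yml my-db-stack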

我只想做你的唯一
#4 · 2019-03-15 02:48

As a more recent update to this question: with a docker swarm hosted on Amazon, for example, you can define a volume that can be shared between services and is available across all nodes of the swarm (using the cloudstor driver, which in turn uses AWS EFS underneath for persistence).

version: '3.3'
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - shell_scripts:/shell_scripts
volumes:
  shell_scripts:
    driver: "cloudstor:aws"
Emotional °昔
#5 · 2019-03-15 02:50

I think you'd do it in a compose file:

volumes:
  - ./src/file:/dest/path
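Concretely, for the mysql case in the question, that might look like the following (paths are illustrative), though note that bind-mount paths resolve on the docker host, which is exactly the limitation the question describes:

services:
  my-db-app:
    image: mysql:latest
    volumes:
      # custom config file
      - ./config/my.cnf:/etc/mysql/conf.d/my.cnf
      # .sql files here run on first initialization
      - ./sql:/docker-entrypoint-initdb.d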