Copying files from Docker container to host

Published 2019-01-02 18:41

I'm thinking of using Docker to build my dependencies on a continuous integration (CI) server, so that I don't have to install all the runtimes and libraries on the agents themselves. To achieve this I would need to copy the build artifacts that are built inside the container back into the host.
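
Roughly, what I have in mind on each agent is something like this (the image name, build script, and artifact path are just placeholders):

docker build -t build-env .        # an image with all the runtimes and libraries baked in
docker run build-env ./build.sh    # run the build inside a container
# ...and then somehow get the artifacts produced inside the container
# (say, /src/output) back onto the host so the CI server can archive them.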

Is that possible?

14 Answers
余欢
#2 · 2019-01-02 19:34

You do not need to use docker run.

You can do it with docker create.

From the docs: "The docker create command creates a writeable container layer over the specified image and prepares it for running the specified command. The container ID is then printed to STDOUT. This is similar to docker run -d except the container is never started."

So, you can do

docker create -ti --name dummy IMAGE_NAME bash
docker cp dummy:/path/to/file /dest/to/file
docker rm -fv dummy

Here, you never start the container, which looked beneficial to me.
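
For example, in a CI job this pattern could look roughly like the following (the image tag and artifact path are just placeholders; the assumption is that the Dockerfile itself runs the build and leaves the artifacts inside the image):

docker build -t my-app-build .                    # build the image; the Dockerfile produces the artifacts
docker create --name extract my-app-build         # create (but never start) a container from it
docker cp extract:/app/build/output ./artifacts   # copy the artifacts out to the host
docker rm -v extract                              # clean up the temporary container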

浪荡孟婆
#3 · 2019-01-02 19:37

Mount a volume, copy the artifacts, and adjust the owner and group IDs:

mkdir artifacts
docker run -i --rm -v ${PWD}/artifacts:/mnt/artifacts centos:6 /bin/bash << COMMANDS
ls -la > /mnt/artifacts/ls.txt
echo Changing owner from \$(id -u):\$(id -g) to $(id -u):$(id -g)
chown -R $(id -u):$(id -g) /mnt/artifacts
COMMANDS
只靠听说
#4 · 2019-01-02 19:38

Mount a "volume" and copy the artifacts into it:

mkdir artifacts
docker run -i -v ${PWD}/artifacts:/artifacts ubuntu:14.04 sh << COMMANDS
# ... build software here ...
cp <artifact> /artifacts
# ... copy more artifacts into `/artifacts` ...
COMMANDS

Then, when the build finishes and the container is no longer running, the artifacts from the build are already in the artifacts directory on the host.

EDIT:

CAVEAT: When you do this, you may run into problems when the user ID inside the container does not match the user ID of the current user on the host. That is, the files in /artifacts will show up as owned by the UID of the user used inside the Docker container. A way around this is to run the container with the calling user's UID:

docker run -i -v ${PWD}:/working_dir -w /working_dir -u $(id -u) \
    ubuntu:14.04 sh << COMMANDS
# Since $(id -u) owns /working_dir, you should be okay running commands here
# and having them work. Then copy stuff into /working_dir/artifacts .
COMMANDS
回忆,回不去的记忆
#5 · 2019-01-02 19:38

As a more general solution, there's a CloudBees plugin for Jenkins to build inside a Docker container. You can select an image to use from a Docker registry or define a Dockerfile to build and use.

It'll mount the workspace into the container as a volume (with the appropriate user), set it as your working directory, and run whatever commands you request inside the container. You can also do this with the docker-workflow plugin (if you prefer code over UI), using the image.inside() {} step.

Basically all of this, baked into your CI/CD server and then some.
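
As a rough sketch of the docker-workflow approach (the image name and build command are placeholders, and it assumes the Docker Pipeline plugin is installed), a scripted Jenkinsfile might look something like:

node {
    checkout scm
    // The plugin mounts the workspace into the container (as the calling user),
    // so anything the build writes there, e.g. target/, stays on the host.
    docker.image('maven:3-jdk-8').inside {
        sh 'mvn -B clean package'
    }
    archiveArtifacts artifacts: 'target/*.jar'
}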

闭嘴吧你
#6 · 2019-01-02 19:39

In order to copy a file from a container to the host, you can use the command:

docker cp <containerId>:/file/path/within/container /host/path/target

Here's an example:

[jalal@goku scratch]$ sudo docker cp goofy_roentgen:/out_read.jpg .

Here, goofy_roentgen is the container name I got from the following command:

[jalal@goku scratch]$ sudo docker ps
[sudo] password for jalal:
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                                            NAMES
1b4ad9311e93        bamos/openface      "/bin/bash"         33 minutes ago      Up 33 minutes       0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp   goofy_roentgen