I have a Docker-in-Docker setup and am trying to mount folders.
Let's say I have folders that I wish to share with their parent. On the host, I created a file called foo in /tmp/dind. The host starts container 1, which starts container 2. This is the result I want:
Host        | Container 1 | Container 2
/tmp/dind   | /tmp/dind2  | /tmp/dind3
      <--------->   <--------->
Instead, I get
Host        | Container 1 | Container 2
/tmp/dind   | /tmp/dind2  | /tmp/dind3
      <--------->
      <------------------------->
Code here:
docker run --rm -it \
  -v /tmp/dind:/tmp/dind2 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker sh -c \
  "docker run --rm -it \
    -v /tmp/dind2:/tmp/dind3 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker ls /tmp/dind3"
This outputs nothing, while the next command, where I changed the mounted volume, gives foo as a result:
docker run --rm -it \
  -v /tmp/dind:/tmp/dind2 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker sh -c \
  "docker run --rm -it \
    -v /tmp/dind:/tmp/dind3 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker ls /tmp/dind3"
The question is: what do I need to do in order to use Container 1's path rather than the host's? Or am I misunderstanding something about Docker here?
For all that you say “Docker-in-Docker” and “dind”, this setup isn’t actually Docker-in-Docker: your container1 is giving instructions to the host’s Docker daemon that affect container2.
Host
  Docker daemon  <---- docker CLI in Container 1 (via the mounted socket)
        |
        +-----> starts Container 2 (a sibling of Container 1, directly on the host)
(NB: this is generally the recommended path for CI-type setups. “Docker-in-Docker” generally means container1 is running its own, separate, Docker daemon, which tends to not be recommended.)
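A quick way to see this arrangement (just a check, not part of your setup): from inside container1 the docker CLI talks to the host’s daemon over the mounted socket, so docker ps lists the host’s containers, including container1 itself.
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker sh
# inside that shell, this lists the host's containers, including the one you are in:
docker ps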
Since container1 is giving instructions to the host’s Docker, and the host’s Docker is launching container2, any docker run -v paths are always the host’s paths. Unless you know that some specific directory has already been mounted into your container, it’s hard to share files with “sub-containers”.
One way to get around this is to assert that there is a shared path of some sort:
docker run \
  -v $PWD/exchange:/exchange \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e EXCHANGE_PATH=$PWD/exchange \
  --name container1 \
  ...

# from within container1: write through the in-container mount point /exchange,
# but pass the host path ($EXCHANGE_PATH) to the nested docker run -v
mkdir /exchange/container2
echo hello world > /exchange/container2/file.txt
docker run \
  -v $EXCHANGE_PATH/container2:/data \
  --name container2 \
  ...
When I’ve done this in the past (for a test setup that wanted to launch helper containers), I’ve used a painstaking docker create, docker cp, docker start, docker cp, docker rm sequence. That’s extremely manual, but it has the advantage that the “local” side of a docker cp is always the current filesystem context, even if you’re talking to the host’s Docker daemon from within a container.
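Roughly, that sequence looks like this (the helper name, image, command, and paths are placeholders, not part of your setup):
# from within container1; no -v flags, so host paths never come into play
docker create --name helper some-image some-command /input/file.txt
docker cp ./file.txt helper:/input/file.txt   # the "local" side is container1's own filesystem
docker start -a helper                        # run the helper and stream its output
docker cp helper:/output/result.txt ./result.txt
docker rm helper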
It does not matter that container 2 binds the host path, because changes to the files in container 1 directly affect the host path as well. They all work on the same files. So your setup is correct and will behave the same as if the mounts were chained the way you described.
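As a quick illustration (using the paths from your second, working command): whatever you do through container 1's mount is immediately visible on the host and in container 2, because all three names point at the same directory on the host.
# inside container 1
touch /tmp/dind2/bar

# on the host
ls /tmp/dind      # foo  bar

# inside container 2 (started with -v /tmp/dind:/tmp/dind3)
ls /tmp/dind3     # foo  bar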
Update
If you want to make sure that the processes do not modify the host files, you could do the following: build a custom Docker image that copies all data from folder a to folder b, and execute your script against folder b. Then mount the files with ./:/a. This way you maintain flexibility in which files you bind into the container without letting the container modify the host files.
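A minimal sketch of that idea (the image name copy-before-run, the script process.sh, and the folders /a and /b are placeholders, not something from your setup):
# Dockerfile
FROM alpine
COPY process.sh /usr/local/bin/process.sh
# copy the bind-mounted input from /a to /b at startup, then only touch the copy
ENTRYPOINT ["sh", "-c", "cp -r /a /b && sh /usr/local/bin/process.sh /b"]

# build and run; mounting the host files read-only adds an extra safeguard
docker build -t copy-before-run .
docker run --rm -v "$PWD":/a:ro copy-before-run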
I hope this answers your question :)