I have a v3 compose file in which 3 services share the same volume. When using swarm mode, we need to create extra containers & volumes to manage our services across the cluster.
I am planning to use NFS server so that single NFS share will get mounted directly on all the hosts within the cluster.
I have found the two ways below, but both need extra steps to be performed on the Docker host -
Mount the NFS share using "fstab" or the "mount" command on the host, then use it as a host volume for Docker services.
Use Netshare plugin - https://github.com/ContainX/docker-volume-netshare
Is there a standard way where I can directly use/mount an NFS share using docker compose v3 by performing only a few or no steps (I understand that the "nfs-common" package is required anyhow) on the Docker host?
After discovering that this is massively undocumented, here's the correct way to mount an NFS volume using stack and docker compose.
The most important thing is that you need to be using
version: "3.2"
or higher. You will have strange and non-obvious errors if you don't. The second issue is that volumes are not automatically updated when their definition changes. This can lead you down a rabbit hole of thinking that your changes aren't correct, when they just haven't been applied. Make sure you run
docker volume rm VOLUMENAME
everywhere it could possibly be, because if the volume exists, it won't be validated. The third issue is more of an NFS issue - the NFS folder will not be created on the server if it doesn't exist. This is just the way NFS works. You need to make sure it exists before you do anything.
(Don't remove 'soft' and 'nolock' unless you're sure you know what you're doing - this stops docker from freezing if your NFS server goes away)
Here's a complete example:
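The example itself appears to have been lost from this answer; below is a minimal sketch of the shape it would take. The service image, the server address 192.168.1.100, and the export path /mnt/nfs/example are placeholders - substitute your own:

```yaml
version: "3.2"

services:
  app:
    image: alpine    # placeholder service; use your own image
    volumes:
      - nfs-data:/data
volumes:
  nfs-data:
    driver: local
    driver_opts:
      type: "nfs"
      # 'soft' and 'nolock' stop docker from freezing if the NFS server goes away
      o: "addr=192.168.1.100,nolock,soft,rw"
      device: ":/mnt/nfs/example"
```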
Now, on swarm-4:
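The commands and output that belonged here seem to be missing; assuming a stack was deployed with a hypothetical name of "example", verifying on the node would look roughly like:

```shell
# deploy from a manager node ('example' is a placeholder stack name)
docker stack deploy --compose-file docker-compose.yml example

# on the node where the task landed, confirm the volume exists
# and that the NFS share is actually mounted
docker volume ls
mount | grep nfs
```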
This volume will be created (but not destroyed) on any swarm node that the stack is running on.
My problem was solved by changing the driver option type to nfs4.
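For reference, a sketch of what that change looks like in the volume definition - the server address and export path are placeholders:

```yaml
volumes:
  nfs-data:
    driver: local
    driver_opts:
      type: "nfs4"                   # changed from "nfs"
      o: "addr=192.168.1.100,rw"     # placeholder NFS server address
      device: ":/mnt/nfs/example"    # placeholder export path
```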
Yes, you can directly reference an NFS share from the compose file:
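The compose snippet seems to have been dropped from this answer; a minimal sketch, where the IP address and export path are placeholders for your own server:

```yaml
volumes:
  db-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,rw"       # placeholder NFS server address
      device: ":/path/on/server"   # placeholder export path
```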
And in an analogous way you could create an NFS volume on each host.
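That per-host creation would look something like the following, with the same placeholder address and path:

```shell
# create a local-driver volume backed by NFS on this host
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.0.10,rw \
  --opt device=:/path/on/server \
  db-data
```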
My solution for AWS EFS, which works:
Install the nfs-common package:
sudo apt-get install -y nfs-common
Check if your EFS works:
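The mount step itself appears to be missing from this answer; assuming a hypothetical EFS DNS name (fs-XXXXXXXX.efs.us-east-1.amazonaws.com is a placeholder), a quick test mount would look like:

```shell
mkdir -p efs-test-point
# mount options here are the standard NFSv4.1 options; the DNS name is a placeholder
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-XXXXXXXX.efs.us-east-1.amazonaws.com:/ efs-test-point
```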
ls -la efs-test-point/
Configure docker-compose.yml file:
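The compose file itself did not survive in this answer; a sketch of its shape, where the service, the mount path, and the EFS DNS name are all placeholders:

```yaml
version: "3.2"

services:
  app:
    image: alpine    # placeholder service
    volumes:
      - efs-data:/data
volumes:
  efs-data:
    driver: local
    driver_opts:
      type: nfs
      # the EFS DNS name below is a placeholder for your own
      o: "addr=fs-XXXXXXXX.efs.us-east-1.amazonaws.com,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
      device: ":/"
```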
Depending on how I need to use the volume, I have the following 3 options.
First, you can create the named volume directly and use it as an external volume in compose, or as a named volume in a `docker run` or `docker service create` command.

Next, there is the `--mount` syntax that works with `docker run` and `docker service create`. This is a rather long option, and when you embed a comma-delimited option within another comma-delimited option, you need to pass some quotes (escaped so the shell doesn't remove them) to the command being run. I tend to use this for a one-off container that needs to access NFS (e.g. a utility container to set up NFS directories).

Lastly, you can define the named volume inside your compose file. One important note when doing this: the named volume only gets created once, and is not updated with any changes. So if you ever need to modify the named volume, you'll want to give it a new name.
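A sketch of that `--mount` invocation for a one-off container - the server name nfs.example.com and both paths are placeholders; note the escaped quotes around the nested comma-delimited `volume-opt=o` value:

```shell
docker run -it --rm \
  --mount type=volume,dst=/data,volume-driver=local,volume-opt=type=nfs,\"volume-opt=o=nfsvers=4,addr=nfs.example.com,rw\",volume-opt=device=:/path/on/server \
  alpine /bin/sh
```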
In each of these examples:

- The volume type is `nfs`, not `nfs4`. This is because docker provides some nice functionality on the `addr` field, but only for the `nfs` type.
- The options passed to `o` are the options that get passed to the mount syscall. One difference between the mount syscall and the mount command in Linux is that the device has the portion before the `:` moved into an `addr` option.
- `nfsvers` is used to set the NFS version. This avoids delays as the OS tries other NFS versions first.
- `addr` may be a DNS name when you use `type=nfs`, rather than only an IP address. This is very useful if you have multiple VPCs with different NFS servers using the same DNS name, or if you want to adjust the NFS server in the future without updating every volume mount.
- `rw` (read-write) can be passed in the `o` option.
- The `device` field is the path on the remote NFS server. The leading colon is required. This is an artifact of how the mount command moves the IP address to the `addr` field for the syscall. This directory must exist on the remote host prior to the volume being mounted into a container.
- In the `--mount` syntax, the `dst` field is the path inside the container. For named volumes, you set this path on the right side of the volume mount (in the short syntax) on your `docker run -v` command.

If you get permission issues accessing a remote NFS volume, a common cause I've encountered is containers running as root, with the NFS server set to root squash (changing all root access to the nobody user). You either need to configure your containers to run as a well-known non-root UID that has access to the directories on the NFS server, or disable root squash on the NFS server.