I am running my Docker containers on a Kubernetes cluster on AWS EKS. Two of my containers use a shared volume, and they run inside two different pods, so I want a common volume that can be used by both pods on AWS.
I created an EFS volume and mounted it. I am following this link to create a PersistentVolumeClaim, but I am getting a timeout error when the efs-provisioner pod tries to attach the mounted EFS volume. The VolumeId and region are correct.
Detailed error message from kubectl describe pod:
timeout expired waiting for volumes to attach or mount for pod "default"/"efs-provisioner-55dcf9f58d-r547q". list of unmounted volumes=[pv-volume]. list of unattached volumes=[pv-volume default-token-lccdw] MountVolume.SetUp failed for volume "pv-volume" : mount failed: exit status 32
The issue was that I had two EC2 instances running, but I had mounted the EFS volume on only one of them, and Kubernetes kept scheduling the pods on the instance that did not have the volume mounted. Once I mounted the same volume on both instances and used the PV and PVC below, it worked fine.
EC2 mounting: AWS EFS mounting with EC2
PV.yml
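For example, an NFS-backed PV pointing at the EFS endpoint can look like this; the file system ID, region, and capacity below are placeholders for your own values:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    # Placeholder: DNS name of your EFS file system (file system ID + region)
    server: fs-12345678.efs.us-east-1.amazonaws.com
    path: /
```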
PVC.yml
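A matching claim, bound statically to that PV (the claim name and requested size are again placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""     # bind statically to the PV above, no dynamic provisioning
  volumeName: pv-volume
  resources:
    requests:
      storage: 5Gi
```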
replicaset.yml
----- only volume section -----
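A sketch of what that volume section can look like, with an illustrative container name, image, and mount path:

```yaml
# Only the pod template's volume section of the ReplicaSet is shown here.
    spec:
      containers:
        - name: app                    # illustrative container name
          image: nginx                 # illustrative image
          volumeMounts:
            - name: efs-volume
              mountPath: /data         # illustrative mount path inside the container
      volumes:
        - name: efs-volume
          persistentVolumeClaim:
            claimName: efs-claim       # the PVC defined above
```

Because EFS supports ReadWriteMany, both pods can reference the same claim and see the same files.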
The problem for me was that I was specifying a different path in my PV than /, and the directory on the NFS server referenced beyond that path did not yet exist. I had to manually create that directory first.

AWS EFS uses the NFS volume plugin, and as per Kubernetes Storage Classes, the NFS volume plugin does not come with an internal provisioner like EBS does.
So the steps will be:
- Use the volume claim in your Deployment.
- In the configmap section, change file.system.id: and aws.region: to match the details of the EFS you created.
- In the deployment section, change server: to the DNS endpoint of the EFS you created (both sections are sketched below).
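Roughly, the two places those steps refer to in the efs-provisioner manifest look like this; the file system ID, region, DNS endpoint, and provisioner name are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: fs-12345678          # placeholder: your EFS file system ID
  aws.region: us-east-1                # placeholder: your EFS region
  provisioner.name: example.com/aws-efs
```

```yaml
# Only the volume part of the efs-provisioner deployment is shown here.
      volumes:
        - name: pv-volume
          nfs:
            server: fs-12345678.efs.us-east-1.amazonaws.com   # placeholder: EFS DNS endpoint
            path: /
```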
For more explanation and details go to https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs