Facing an issue with attaching EFS volume to Kubernetes

Published 2019-02-25 09:31

Question:

I am running my Docker containers on a Kubernetes cluster on AWS EKS. Two of my containers use a shared volume, and these containers run in two different pods, so I want a common volume on AWS that can be used by both pods.

I created an EFS volume and mounted it. I am following this link to create a PersistentVolumeClaim, but I am getting a timeout error when the efs-provisioner pod tries to attach the mounted EFS volume. The VolumeId and region are definitely correct.

Detailed error message from kubectl describe on the pod:

timeout expired waiting for volumes to attach or mount for pod "default"/"efs-provisioner-55dcf9f58d-r547q". list of unmounted volumes=[pv-volume]. list of unattached volumes=[pv-volume default-token-lccdw] MountVolume.SetUp failed for volume "pv-volume" : mount failed: exit status 32

Answer 1:

The issue was that I had 2 EC2 instances running, but I had mounted the EFS volume on only one of them, and kubectl was always deploying pods on the EC2 instance that did not have the mounted volume. Once I mounted the same volume on both instances and used the PVC and PV below, it started working fine.

EC2 mounting: AWS EFS mounting with EC2
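
For reference, this is roughly the mount step that has to be repeated on every instance; a minimal sketch, assuming a hypothetical file system ID fs-12345678 in us-east-1:

# run on each EC2 instance / EKS worker node
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs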

PV.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: efs_public_dns.amazonaws.com   # replace with your EFS DNS name
    path: "/"

PVC.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
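
To verify the pair binds, apply both files and check the status (a sketch; note that on clusters with a default StorageClass you may need storageClassName: "" in both specs so the claim binds to this pre-created PV instead of triggering dynamic provisioning):

kubectl apply -f PV.yml
kubectl apply -f PVC.yml
kubectl get pv,pvc    # both should report STATUS "Bound"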

replicaset.yml

----- only volume section -----

 volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: efs
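
Inside the same pod spec, each container then references this volume with a matching volumeMounts entry; a minimal sketch of the container side, assuming a hypothetical container name and mount path:

 containers:
  - name: app                    # hypothetical container name
    image: your-image            # hypothetical image
    volumeMounts:
      - name: test-volume        # must match the volume name above
        mountPath: /data         # hypothetical path inside the container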


Answer 2:

AWS EFS is consumed through the NFS volume plugin, and as per the Kubernetes Storage Classes documentation, the NFS volume plugin does not come with an internal provisioner the way EBS does.

So the steps will be:

  1. Create an external provisioner for the NFS volume plugin.
  2. Create a storage class.
  3. Create a volume claim.
  4. Use the volume claim in a Deployment (a sketch of this last step is shown after the manifests below).

    • In the ConfigMap section, change file.system.id: and aws.region: to match the details of the EFS you created.

    • In the Deployment section, change server: to the DNS endpoint of the EFS you created.


---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: yourEFSsystemid
  aws.region: regionyourEFSisin
  provisioner.name: example.com/aws-efs

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: efs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          env:
            - name: FILE_SYSTEM_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: file.system.id
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: aws.region
            - name: PROVISIONER_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: provisioner.name
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: yourEFSsystemID.efs.yourEFSregion.amazonaws.com
            path: /

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: example.com/aws-efs

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: aws-efs
  resources:
    requests:
      storage: 1Mi
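
For step 4, the claim can then be consumed from a Deployment like any other PVC; a minimal sketch, assuming a hypothetical application image and mount path:

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: efs-app
  template:
    metadata:
      labels:
        app: efs-app
    spec:
      containers:
        - name: efs-app
          image: nginx            # hypothetical image
          volumeMounts:
            - name: efs-data
              mountPath: /data    # hypothetical mount path
      volumes:
        - name: efs-data
          persistentVolumeClaim:
            claimName: efs        # the claim defined above

Because the claim is ReadWriteMany, both replicas can mount the same EFS-backed volume at once.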

For more explanation and details, see https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs



Answer 3:

The problem for me was that I had specified a path in my PV other than /, and the directory that this path referenced did not yet exist on the NFS server. I had to create that directory manually first.
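
A minimal sketch of that fix, assuming a hypothetical file system fs-12345678 in us-east-1 and a PV path of /data:

# mount the EFS root temporarily from any instance that can reach it
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
# create the directory that the PV's path field points at, then unmount
sudo mkdir -p /mnt/efs/data
sudo umount /mnt/efs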