Kubernetes PersistentVolumeClaim issues in AWS

Posted 2019-06-08 20:54

We have successfully created the pods, services, and replication controllers our project requires. Now we are planning to set up persistent storage in AWS using Kubernetes. I have written a YAML file that creates an EBS volume in AWS, and it works as expected: I can claim the volume and mount it into my pod (with a single replica only).

But when I try to run more than one replica, the pods do not all start successfully. The volume is created in only one availability zone, so if a replica is scheduled onto a node in a different zone, that pod cannot attach the already-created volume and fails to start. How can I create volumes in different zones for the same application? How do I make this work with multiple replicas, and how should I create my PersistentVolumeClaims?

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-pvc
  labels:
    type: amazonEBS
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mongo-pp
  name: mongo-controller-pp
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: mongo-pp
    spec:
      containers:
      - image: mongo
        name: mongo-pp
        ports:
        - name: mongo-pp
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - mountPath: "/opt/couchbase/var"
          name: mypd1
      volumes:
      - name: mypd1
        persistentVolumeClaim:
          claimName: mongo-pvc

Tags: kubernetes
2 Answers
Rolldiameter
#2 · 2019-06-08 21:26

When you are using ReadWriteOnce volumes (ones that cannot be mounted by multiple pods at the same time), simply creating a PV/PVC pair will not be enough.

Both the PV and the PVC are effectively singular: if your pod template refers to a particular claim name, all of the pods will try to bind to that same claim, and to the single PV bound to it. The result is a race in which only the first pod is allowed to mount that RWO storage; the others fail.

To mitigate this, do not reference the PVC directly; instead use volumeClaimTemplates (a StatefulSet feature) so that a new PVC is created dynamically for every pod that is scaled up, like below:

 volumeClaimTemplates:
  - metadata:
      name: claimname
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
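
For context, volumeClaimTemplates only exist on StatefulSets, not on Deployments or replication controllers. Below is a minimal sketch of how the template above could be embedded in a StatefulSet; the names mongo-ss and the headless Service mongo are illustrative assumptions, not values from the question:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo-ss                # illustrative name
spec:
  serviceName: mongo            # assumes a headless Service named "mongo" exists
  replicas: 2
  selector:
    matchLabels:
      name: mongo-pp
  template:
    metadata:
      labels:
        name: mongo-pp
    spec:
      containers:
      - name: mongo-pp
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: claimname       # must match the template name below
          mountPath: /data/db
  volumeClaimTemplates:         # one PVC is created per replica (claimname-mongo-ss-0, -1, ...)
  - metadata:
      name: claimname
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

Because every replica gets its own claim and its own dynamically provisioned volume, no two pods compete for the same EBS volume, and each pod can be scheduled onto a node in the zone of its own volume.
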
迷人小祖宗
#3 · 2019-06-08 21:48

I think the problem you're facing is caused by the underlying storage mechanism, in this case EBS.

When scaling pods behind a replication controller, each replica will attempt to mount the same persistent volume. If you look at the Kubernetes docs regarding EBS, you will see the following restrictions:

There are some restrictions when using an awsElasticBlockStore volume:
  - the nodes on which pods are running must be AWS EC2 instances
  - those instances need to be in the same region and availability zone as the EBS volume
  - EBS only supports a single EC2 instance mounting a volume

So by default, when you scale up behind a replication controller, Kubernetes will try to spread the pods across different nodes. That means a second node tries to mount the volume, which EBS does not allow.

Basically, I see that you have two options.

  1. Use a different volume type that supports multiple writers, such as NFS or GlusterFS (see the sketch after this list).
  2. Use a StatefulSet instead of a replication controller and have each replica mount an independent volume. This requires database-level replication but provides high availability.
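
For option 1, here is a rough sketch of a statically provisioned NFS volume with a ReadWriteMany claim that several pods can share; the server address, export path, and names are placeholders, not values from the question:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-nfs-pv            # placeholder name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany             # NFS allows multiple nodes to mount the volume
  nfs:
    server: 10.0.0.10           # placeholder: address of your NFS server
    path: /exports/mongo        # placeholder: exported directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-nfs-pvc
spec:
  storageClassName: ""          # bind to the static PV above, not a dynamic provisioner
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

The claim can then be referenced from the pod template exactly as in your ReplicationController, and because the access mode is ReadWriteMany, all replicas can mount it regardless of which zone their node is in.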