Kubernetes / Rancher 2, mongo-replicaset with Local Persistent Volumes

Published 2020-04-09 03:44

Question:

I have tried repeatedly, but Rancher 2.1 fails to deploy the "mongo-replicaset" Catalog App with Local Persistent Volumes configured.

How do I correctly deploy a mongo-replicaset with a Local Storage Volume? Any debugging techniques are appreciated, since I am new to Rancher 2.

I follow the four steps A-D below, but the first pod deployment never finishes. What is wrong with it? Logs and result screens are at the end. Detailed configuration can be found here.

Note: Deployment without Local Persistent Volumes succeeds.

Note: Deployment with a Local Persistent Volume and the plain "mongo" image (without the replicaset version) succeeds.

Note: Deployment with both mongo-replicaset and a Local Persistent Volume fails.


Step A - Cluster

Create a Rancher instance, and:

  1. Add three nodes: a worker, a worker + etcd, and a worker + control plane
  2. Add a label to each node (name: one, name: two, name: three) for node affinity
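For reference, the label from step 2 would look like this in a node's metadata (the key name and the values one/two/three are my reading of the steps above, not confirmed by the original post):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-1
  labels:
    name: one   # "two" / "three" on the other two nodes
```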

Step B - Storage class

Create a storage class with these parameters:

  1. volumeBindingMode: WaitForFirstConsumer, as seen here
  2. name: local-storage
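The two settings above correspond to a StorageClass manifest like this (a minimal sketch; local volumes have no dynamic provisioner, hence kubernetes.io/no-provisioner):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # local volumes are provisioned manually
volumeBindingMode: WaitForFirstConsumer     # delay binding until a pod is scheduled
```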

Step C - Persistent Volumes

Add 3 persistent volumes like this:

  1. type: local node path
  2. access mode: Single Node RW, 12Gi
  3. storage class: local-storage
  4. node affinity: name one (two for the second volume, three for the third)
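Expressed as a manifest, the first of the three PersistentVolumes would look roughly like this (the path /mongo and the label key name are assumptions; adjust them to your setup):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv-one
spec:
  capacity:
    storage: 12Gi
  accessModes:
    - ReadWriteOnce            # "Single Node RW"
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mongo               # assumed node-local path
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: name        # assumed label key
              operator: In
              values:
                - one          # "two" / "three" for the other volumes
```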

Step D - Mongo-replicaset Deployment

From the catalog, select Mongo-replicaset and configure it as follows:

  1. replicaSetName: rs0
  2. persistentVolume.enabled: true
  3. persistentVolume.size: 12Gi
  4. persistentVolume.storageClass: local-storage
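The four answers above map onto chart values roughly like this (a sketch of the Helm values, not the complete answers file):

```yaml
replicaSetName: rs0
persistentVolume:
  enabled: true
  size: 12Gi
  storageClass: local-storage
```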

Result

After performing steps A-D, the newly created mongo-replicaset app stays indefinitely in the "Initializing" state.

The associated mongo workload contains only one pod, instead of three. And this pod has two crashed containers, bootstrap and mongo-replicaset.


Logs

This is the output from the four containers of the only running pod. No errors or problems are reported.

I can't figure out what's wrong with this configuration, and I don't have any tools or techniques to analyze the problem. Detailed configuration can be found here. Please ask if you need the output of more commands.

Thank you

Answer 1:

All of this configuration is correct.

One detail is missing: Rancher is a containerized deployment of Kubernetes, so the kubelets run on each node inside Docker containers and, by default, cannot access the host OS's local folders.

You need to add a volume bind for the kubelets; Kubernetes will then be able to create the mongo pod with that same binding.

In Rancher, edit the cluster yaml (Cluster > Edit > Edit as Yaml).

Add the following entry under the "services" node:

  kubelet: 
    extra_binds: 
      - "/mongo:/mongo:rshared"
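In context, the entry sits under the services section of the cluster yaml, roughly like this (the surrounding keys are a sketch of a typical Rancher 2 cluster config; your file may differ):

```yaml
rancher_kubernetes_engine_config:
  services:
    kubelet:
      extra_binds:
        - "/mongo:/mongo:rshared"   # rshared makes mount propagation bidirectional
```

The rshared propagation flag matters here: it lets mounts created inside the kubelet container become visible on the host, and vice versa, so the local PV path works from both sides.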