Kubernetes NFS volume mount fails with exit status 32

Posted 2019-03-30 19:47

I have Kubernetes installed on my Ubuntu machine. I'm trying to set up an NFS volume and mount it into a container, following this document: http://kubernetes.io/v1.1/examples/nfs/.

NFS service and pod configurations:

kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
    - port: 2049
  selector:
    role: nfs-server
---
apiVersion: v1
kind: Pod
metadata:
  name: nfs-server
  labels:
    role: nfs-server
spec:
  containers:
    - name: nfs-server
      image: jsafrane/nfs-data
      ports:
        - name: nfs
          containerPort: 2049
      securityContext:
        privileged: true

Pod configuration that mounts the NFS volume:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-web
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/usr/share/nginx/html"
  volumes:
    - name: nfs
      nfs:
        # FIXME: use the right hostname
        server: 192.168.3.201
        path: "/"
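Assuming the two manifests above are saved as nfs-server.yaml and nfs-web.yaml (file names are my own, not from the example), they can be applied and the mount attempt watched like this:

```shell
# Guard so the snippet is a no-op where kubectl isn't configured.
command -v kubectl >/dev/null 2>&1 || { echo "kubectl not on PATH"; exit 0; }

# Create the NFS server Service/Pod, then the pod that mounts the share.
kubectl create -f nfs-server.yaml
kubectl create -f nfs-web.yaml

# The Events section at the bottom shows FailedMount messages while
# the pod is stuck in ContainerCreating.
kubectl describe pod nfs-web
```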

When I run kubectl describe pod nfs-web I get the following output, saying it was unable to mount the NFS volume. What could be the reason for that?

Name:               nfs-web
Namespace:          default
Image(s):           nginx
Node:               192.168.1.114/192.168.1.114
Start Time:         Sun, 06 Dec 2015 08:31:06 +0530
Labels:             <none>
Status:             Pending
Reason:             
Message:            
IP:             
Replication Controllers:    <none>
Containers:
  web:
    Container ID:   
    Image:      nginx
    Image ID:       
    State:      Waiting
      Reason:       ContainerCreating
    Ready:      False
    Restart Count:  0
    Environment Variables:
Conditions:
  Type      Status
  Ready     False 
Volumes:
  nfs:
    Type:   NFS (an NFS mount that lasts the lifetime of a pod)
    Server: 192.168.3.201
    Path:   /
    ReadOnly:   false
  default-token-nh698:
    Type:   Secret (a secret that should populate this volume)
    SecretName: default-token-nh698
Events:
  FirstSeen LastSeen    Count   From            SubobjectPath   Reason      Message
  ───────── ────────    ─────   ────            ─────────────   ──────      ───────
  36s       36s     1   {scheduler }                Scheduled   Successfully assigned nfs-web to 192.168.1.114
  36s       2s      5   {kubelet 192.168.1.114}         FailedMount Unable to mount volumes for pod "nfs-web_default": exit status 32
  36s       2s      5   {kubelet 192.168.1.114}         FailedSync  Error syncing pod, skipping: exit status 32

5 Answers
劫难
#2 · 2019-03-30 20:27

Having this issue right now... using coreos-alpha (1010.1.0) with the Kubernetes v1.2.2_coreos.0 image from quay.io, and a RHEL NFS server external to the cluster. Everything else works fine; the only problem is the pod mounting the NFS share.

PV and PVC creation seem OK...

  16m           5s              77      {kubelet 10.163.224.136}                        Warning         FailedMount     Unable to mount volumes for pod "es-data-xvzxl_default(65b2c286-078e-11e6-99f9-005056a71442)": Mount failed: exit status 32
Mounting arguments: 10.163.224.128:/data/kubefs /var/lib/kubelet/pods/65b2c286-078e-11e6-99f9-005056a71442/volumes/kubernetes.io~nfs/pv0001 nfs []
Output: mount: wrong fs type, bad option, bad superblock on 10.163.224.128:/data/kubefs,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
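One way to narrow this down (a sketch, with the server address and export path taken from the error output above) is to try the same mount by hand from the affected node. If a plain mount fails the same way, the node is missing the NFS client helper rather than Kubernetes misbehaving:

```shell
# Attempt the exact mount the kubelet tried, outside Kubernetes.
mkdir -p /tmp/nfstest
if sudo mount -t nfs 10.163.224.128:/data/kubefs /tmp/nfstest 2>/dev/null; then
  echo "manual mount OK - problem is on the Kubernetes side"
  sudo umount /tmp/nfstest
else
  echo "manual mount failed - check for /sbin/mount.nfs on this node"
fi
```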
不美不萌又怎样
#3 · 2019-03-30 20:30

I had the same problem, and I solved it by installing nfs-common on every Kubernetes node.

apt-get install -y nfs-common

My nodes had been provisioned without nfs-common. Kubernetes asks each node to mount the NFS export into a local directory so it can be exposed to the pod; since mount.nfs was not found, the mount failed.
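A quick way to check whether a node is affected (a sketch; mount.nfs is the helper that mount(8) looks for, and exit status 32 is mount's generic failure code):

```shell
# Report whether the NFS mount helper is available on this node.
if command -v mount.nfs >/dev/null 2>&1; then
  echo "mount.nfs present: $(command -v mount.nfs)"
else
  echo "mount.nfs missing - install it with: sudo apt-get install -y nfs-common"
fi
```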

Good luck!

我命由我不由天
#4 · 2019-03-30 20:35

I also ran into this mount/syncing issue on v1.1.2, with an independent NFS service running outside of K8s.

I haven't been able to figure out whether it's a bug in K8s or my NFS server acting up, though I suspect the former since I'm not doing anything special with my NFS setup. What usually happens is that the pod eventually restarts itself and things "just work", or I have to kubectl delete/create it manually.

I know this is neither optimal nor a diagnosis of the root cause, but it's my current band-aid solution.

Root(大扎)
#5 · 2019-03-30 20:36

It looks like volumes.nfs.server=192.168.3.201 is incorrectly configured on your client. It should be set to the ClusterIP address of your nfs-server Service.
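If that's the intent, the Service's ClusterIP can be looked up like this (a sketch; the jsonpath query is standard kubectl behaviour, not from the original post):

```shell
# No-op where kubectl isn't configured.
command -v kubectl >/dev/null 2>&1 || { echo "kubectl not on PATH"; exit 0; }

# Print the ClusterIP assigned to the nfs-server Service; this is
# the address to put in the pod's volumes.nfs.server field.
kubectl get svc nfs-server -o jsonpath='{.spec.clusterIP}'
```

Note that the kubelet performs NFS mounts from the host itself, so the node must be able to reach that address (i.e. kube-proxy must be running on the node).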

ゆ 、 Hurt°
#6 · 2019-03-30 20:45

I had the same issue.

It was fixed by installing nfs-utils on the worker nodes.
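For RPM-based nodes, the equivalent of the nfs-common fix looks like this (a sketch, assuming yum-managed RHEL/CentOS workers):

```shell
# Install the NFS client package only if the mount helper is absent.
if command -v mount.nfs >/dev/null 2>&1; then
  echo "NFS client already present"
else
  sudo yum install -y nfs-utils || echo "install failed - run as root on the node"
fi
```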
