Kubernetes: unable to mount volumes with cloud-provider

0 votes

I am using the Cinder plugin in Kubernetes to create static PersistentVolumes as well as StorageClasses, but none of my pods can mount the volumes.

Kubernetes Version:

kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:19:49Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:13:36Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

Kubelet command and status:

systemctl status kubelet -l
● kubelet.service - Kubelet service
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-10-20 07:43:07 PDT; 3h 53min ago
  Process: 2406 ExecStartPre=/usr/local/bin/install-kube-binaries (code=exited, status=0/SUCCESS)
  Process: 2400 ExecStartPre=/usr/local/bin/create-certs (code=exited, status=0/SUCCESS)
 Main PID: 2408 (kubelet)
   CGroup: /system.slice/kubelet.service
           ├─2408 /usr/local/bin/kubelet --pod-manifest-path=/etc/kubernetes/manifests --api-servers=https://172.17.0.101:6443 --logtostderr=true --v=12 --allow-privileged=true --hostname-override=jk-kube2-master --pod-infra-container-image=pause-amd64:3.0 --cluster-dns=172.31.53.53 --cluster-domain=occloud --cloud-provider=openstack --cloud-config=/etc/cloud.conf

My cloud.conf file:

# cat /etc/cloud.conf
[Global]
username=<user>
password=XXXXXXXX
auth-url=http://<openStack URL>:5000/v2.0
tenant-name=Shadow
region=RegionOne

Kubernetes can communicate with OpenStack. This is from /var/log/messages:

kubelet: I1020 11:43:51.770948    2408 openstack_instances.go:41] openstack.Instances() called
kubelet: I1020 11:43:51.836642    2408 openstack_instances.go:78] Found 39 compute flavors
kubelet: I1020 11:43:51.836679    2408 openstack_instances.go:79] Claiming to support Instances
kubelet: I1020 11:43:51.836688    2408 openstack_instances.go:124] NodeAddresses(jk-kube2-master) called
kubelet: I1020 11:43:52.274332    2408 openstack_instances.go:131] NodeAddresses(jk-kube2-master) => [{InternalIP 172.17.0.101} {ExternalIP 10.75.152.101}]

The PersistentVolume, the PersistentVolumeClaim, and the cinder list output:

# cat persistentVolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: jk-test
  labels:
    type: test
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  cinder:
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950
    fsType: ext4

# cat persistentVolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: "test"

# cinder list | grep jk-cinder
| 48d2d1e6-e063-437a-855f-8b62b640a950 | available |              jk-cinder              |  10  |      -      |  false   |          

As you can see, the volume referenced in persistentVolume.yaml is available, and when I create the PV and PVC they bind as expected:

NAME         CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
pv/jk-test   10Gi       RWO           Retain          Bound     default/myclaim             5h
NAME               STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
pvc/myclaim        Bound     jk-test   10Gi       RWO           5h

But when I then create a pod that uses the PVC, the volume fails to mount:

# cat testPod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: jk-test3
  labels:
    name: jk-test
spec:
  containers:
    - name: front-end
      image: example-front-end:latest
      ports:
        - hostPort: 6000
          containerPort: 3000
  volumes:
    - name: jk-test
      persistentVolumeClaim:
        claimName: myclaim

The events on the pod:

  3h            46s             109     {kubelet jk-kube2-master}                       Warning         FailedMount     Unable to mount volumes for pod "jk-test3_default(0f83368f-96d4-11e6-8243-fa163ebfcd23)": timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]
  3h            46s             109     {kubelet jk-kube2-master}                       Warning         FailedSync      Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]

I have already verified that OpenStack exposes the Cinder v1 and v2 APIs, and the openstack_instances log lines above show that the Nova API can be reached.
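
For reference, this is the kind of check I mean, assuming the standard OpenStack CLI client and the same credentials as in cloud.conf:

# the catalog should list volume (v1) and volumev2 endpoints for RegionOne
openstack catalog list | grep -i volume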

The logs I believe are relevant to the mount failure:

kubelet: I1020 06:51:11.840341   24027 desired_state_of_world_populator.go:323] Extracted volumeSpec (0x23a45e0) from bound PV (pvName "jk-test") and PVC (ClaimName "default"/"myclaim" pvcUID 51919dfb-96c9-11e6-8243-fa163ebfcd23)
kubelet: I1020 06:51:11.840424   24027 desired_state_of_world_populator.go:241] Added volume "jk-test" (volSpec="jk-test") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.840474   24027 desired_state_of_world_populator.go:241] Added volume "default-token-js40f" (volSpec="default-token-js40f") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.896176   24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896330   24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896361   24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896390   24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896420   24027 config.go:98] Looking for [api file], have seen map[file:{} api:{}]
kubelet: E1020 06:51:11.896566   24027 nestedpendingoperations.go:253] Operation for "\"kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950\"" failed. No retries permitted until 2016-10-20 06:53:11.896529189 -0700 PDT (durationBeforeRetry 2m0s). Error: Volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23") has not yet been added to the list of VolumesInUse in the node's volume status.

I've been following the k8s mysql-cinder-pd example guide, but it gets me nowhere. I even tried defining another StorageClass, like the one documented by Kubernetes. Below are the StorageClass and PVC I used:

# cat cinderStorage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  availability: nova
# cat dynamicPVC.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamicclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: "gold"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

The StorageClass is created successfully, but when I create a PVC against it, the claim gets stuck in Pending and reports "no volume plugin matched":

# kubectl get storageclass
NAME      TYPE
gold      kubernetes.io/cinder
# kubectl describe pvc dynamicclaim
Name:           dynamicclaim
Namespace:      default
Status:         Pending
Volume:
Labels:         <none>
Capacity:
Access Modes:
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                            -------------   --------        ------                  -------
  1d            15s             5867    {persistentvolume-controller }                  Warning         ProvisioningFailed      no volume plugin matched

This contradicts the kubelet logs, which show the cinder volume plugin being loaded:

# grep plugins /var/log/messages
kubelet: I1019 11:39:41.382517   22435 plugins.go:56] Registering credential provider: .dockercfg
kubelet: I1019 11:39:41.382673   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/aws-ebs"
kubelet: I1019 11:39:41.382685   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/empty-dir"
kubelet: I1019 11:39:41.382691   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/gce-pd"
kubelet: I1019 11:39:41.382698   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/git-repo"
kubelet: I1019 11:39:41.382705   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/host-path"
kubelet: I1019 11:39:41.382712   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/nfs"
kubelet: I1019 11:39:41.382718   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/secret"
kubelet: I1019 11:39:41.382725   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/iscsi"
kubelet: I1019 11:39:41.382734   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/glusterfs"
jk-kube2-master kubelet: I1019 11:39:41.382741   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/rbd"
kubelet: I1019 11:39:41.382749   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cinder"
kubelet: I1019 11:39:41.382755   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/quobyte"
kubelet: I1019 11:39:41.382762   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cephfs"
kubelet: I1019 11:39:41.382781   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/downward-api"
kubelet: I1019 11:39:41.382798   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/fc"
kubelet: I1019 11:39:41.382804   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/flocker"
kubelet: I1019 11:39:41.382822   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-file"
kubelet: I1019 11:39:41.382839   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/configmap"
kubelet: I1019 11:39:41.382846   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/vsphere-volume"
kubelet: I1019 11:39:41.382853   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-disk"

I do have the nova and cinder CLI clients installed on the machine:

# which nova
/usr/bin/nova
# which cinder
/usr/bin/cinder

Please help!
Nov 30, 2018 in Kubernetes by ffdfd

1 answer to this question.

0 votes

Kubernetes 1.5.0 and 1.5.3 do support Cinder. The short answer: you're missing the volumeMounts: section in your pod's container spec.
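
For reference, here's a minimal sketch of your testPod.yaml with the missing volumeMounts: added (the mountPath /usr/share/test is an assumption; point it wherever your front-end expects its data):

kind: Pod
apiVersion: v1
metadata:
  name: jk-test3
  labels:
    name: jk-test
spec:
  containers:
    - name: front-end
      image: example-front-end:latest
      ports:
        - hostPort: 6000
          containerPort: 3000
      # the container must declare where the claimed volume is mounted
      volumeMounts:
        - name: jk-test              # must match the volume name below
          mountPath: /usr/share/test # assumed path; adjust for your app
  volumes:
    - name: jk-test
      persistentVolumeClaim:
        claimName: myclaim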

The longer version: when you already have an existing Cinder volume, you can reference it directly from a Pod or Deployment, with no need for either a PersistentVolume or a PersistentVolumeClaim.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vol-test
  labels:
    fullname: vol-test
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        fullname: vol-test
    spec:
      containers:
        - name:  nginx
          image: "nginx:1.11.6-alpine"
          imagePullPolicy: IfNotPresent
          args:
            - /bin/sh
            - -c
            - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html/
      volumes:
        - name: data
          cinder:
            volumeID: e143368a-440a-400f-b8a4-dd2f46c51888

This creates a Deployment and a Pod, and your Cinder volume is mounted into the nginx container. To verify, edit a file inside the nginx container under /usr/share/nginx/html/ and then stop the container. This forces Kubernetes to create a new container, and you'll see the same edited file inside the new one. When you delete the Deployment, the Cinder volume detaches itself from the VM. A quick way to run that check is sketched below.
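
A minimal sketch of that check, assuming the Deployment above and a working kubectl context (POD_NAME and NEW_POD_NAME are placeholders for the actual pod names):

# find the pod created by the vol-test deployment
kubectl get pods -l fullname=vol-test
# write a marker file onto the mounted cinder volume
kubectl exec POD_NAME -- /bin/sh -c 'echo "survives restart" > /usr/share/nginx/html/marker.txt'
# delete the pod; the deployment recreates it and re-attaches the same volume
kubectl delete pod POD_NAME
# the marker file should still exist in the replacement pod
kubectl exec NEW_POD_NAME -- cat /usr/share/nginx/html/marker.txt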

Now, if you already have a Cinder volume but want to use a PV and PVC instead, keep this in mind for your problem (from the Kubernetes docs):

A PV with no annotation or its class annotation set to "" has no class and can only be bound to PVCs that request no particular class

An example StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  # to be used as value for annotation:
  # volume.beta.kubernetes.io/storage-class
  name: cinder-gluster-hdd
provisioner: kubernetes.io/cinder
parameters:
  # openstack volume type
  type: gluster_hdd
  # openstack availability zone
  availability: nova

Then reference your existing Cinder volume ID in a PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  # name of a pv resource visible in Kubernetes, not the name of
  # a cinder volume
  name: pv0001
  labels:
    pv-first-label: "123"
    pv-second-label: abc
  annotations:
    volume.beta.kubernetes.io/storage-class: cinder-gluster-hdd
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  cinder:
    # ID of cinder volume
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950

Now create a PVC whose selector matches the labels on your PV:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: vol-test
  labels:
    pvc-first-label: "123"
    pvc-second-label: abc
  annotations:
    volume.beta.kubernetes.io/storage-class: "cinder-gluster-hdd"
spec:
  accessModes:
    # the volume can be mounted as read-write by a single node
    - ReadWriteOnce
  resources:
    requests:
      storage: "1Gi"
  selector:
    matchLabels:
      pv-first-label: "123"
      pv-second-label: abc

And finally a Deployment that mounts the claim:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vol-test
  labels:
    fullname: vol-test
    environment: testing
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        fullname: vol-test
        environment: testing
    spec:
      nodeSelector:
        "is_worker": "true"
      containers:
        - name:  nginx-exist-vol
          image: "nginx:1.11.6-alpine"
          imagePullPolicy: IfNotPresent
          args:
            - /bin/sh
            - -c
            - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html/
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: vol-test
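
To tie the pieces together, here's a minimal usage sketch (the file names are just assumptions for how you might save the four manifests above):

kubectl create -f storage-class.yaml
kubectl create -f pv.yaml
kubectl create -f pvc.yaml
kubectl create -f deployment.yaml
# both the PV and the PVC should report Bound before the pod can mount the volume
kubectl get pv pv0001
kubectl get pvc vol-test
# if the mount still times out, check the pod's events
kubectl describe pods -l fullname=vol-test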



answered Nov 30, 2018 by DareDev
