Quickstart: toleration issues

Hi!

I have a 3-node Kubernetes cluster running on OpenStack VMs (Ubuntu 20). I wanted to create an Elasticsearch cluster, but I have to say that I am a bit confused by the instructions, especially about setting up the PV, PVC, and StorageClass.

I now have a toleration issue on my pod:

# kc describe pod
...
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  59s (x7 over 7m39s)  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't find available persistent volumes to bind.

What could be the problem?

More information:

k8s-controller eck # kc get es
NAME         HEALTH    NODES   VERSION   PHASE             AGE
quickstart   unknown           7.6.1     ApplyingChanges   12m
k8s-controller eck # kc get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                         STORAGECLASS    REASON   AGE
es-storage   1Gi        RWO            Retain           Available   /elasticsearch-data-quickstart-es-default-0   local-storage            15m
k8s-controller eck # kc get pvc
NAME                                         STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
elasticsearch-data-quickstart-es-default-0   Pending                                      local-storage   11m
k8s-controller eck # kc get storageclass
NAME            PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-storage   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  14m

Thank you for your help!

It looks like you have an available PV, but the PVC stays in a Pending status. Why can't it bind to the available PV? Can you inspect the PVC resource? It may give us more details:

kubectl describe pvc elasticsearch-data-quickstart-es-default-0
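
If that doesn't reveal anything, it may also be worth looking at the PV side and at recent cluster events. These are standard kubectl calls, nothing ECK-specific:

kubectl describe pv es-storage
kubectl get events --sort-by=.lastTimestamp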

k8s-controller ~ # kubectl describe pvc elasticsearch-data-quickstart-es-default-0 
Name:          elasticsearch-data-quickstart-es-default-0
Namespace:     default
StorageClass:  local-storage
Status:        Pending
Volume:        
Labels:        common.k8s.elastic.co/type=elasticsearch
               elasticsearch.k8s.elastic.co/cluster-name=quickstart
               elasticsearch.k8s.elastic.co/statefulset-name=quickstart-es-default
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       quickstart-es-default-0
Events:
  Type    Reason               Age                        From                         Message
  ----    ------               ----                       ----                         -------
  Normal  WaitForPodScheduled  4m58s (x26324 over 4d13h)  persistentvolume-controller  waiting for pod quickstart-es-default-0 to be scheduled

Or maybe there is a way to do this without a PV and PVC at all? With local storage, maybe?

We really recommend using PVs and PVCs; they work well with local storage.
In your case it looks like you have one PV created, but the Pod can't be scheduled on 2 out of 3 k8s nodes. Your PV looks like a local volume; is there a chance it happens to be located on the one k8s node that has the incompatible taint?

1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate
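
If the volume does turn out to be on the master, one option would be to let the pod tolerate that taint. This is only a minimal sketch against the ECK quickstart manifest (assuming the single default nodeSet from the quickstart; adjust to match your actual spec):

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.6.1
  nodeSets:
  - name: default
    count: 1
    podTemplate:
      spec:
        # Allow scheduling onto nodes carrying the master taint
        tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule

Keep in mind that scheduling workloads onto the master node is usually only appropriate for test clusters.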

The PV's node affinity is configured so that it can only be located on one of the worker nodes:

apiVersion: v1
kind: PersistentVolume
metadata:
  creationTimestamp: "2021-03-11T17:14:20Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: es-storage
  resourceVersion: "1302627"
  uid: 2725d615-103f-46d0-918a-f70588965e2f
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    name: elasticsearch-data-quickstart-es-default-0
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-worker-1
          - k8s-worker-2
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  volumeMode: Filesystem
status:
  phase: Available
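
One detail that may be worth checking, since the nodeAffinity above allows both workers yet the scheduler reported that neither of them found an available volume to bind: a pre-bound PV's claimRef has to match the PVC on both name and namespace, and the namespace field is missing here. Assuming the quickstart runs in the default namespace, the relevant fragment would look roughly like this:

  claimRef:
    name: elasticsearch-data-quickstart-es-default-0
    namespace: default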
