Pod has unbound immediate PersistentVolumeClaims

Hello,

I'm trying to deploy ECK into Open Telekom Cloud. I previously tested it on GCP Kubernetes Engine and it worked like a charm.

Unfortunately, I have run into a problem in Open Telekom Cloud related to PersistentVolumeClaims. I'm following https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html and it seems the PVC is not created for the quickstart-es-default-0 pod:

kubectl describe pod quickstart-es-default-0
Name:               quickstart-es-default-0
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             common.k8s.elastic.co/type=elasticsearch
                    controller-revision-hash=quickstart-es-default-5d7cbf6c5b
                    elasticsearch.k8s.elastic.co/cluster-name=quickstart
                    elasticsearch.k8s.elastic.co/config-hash=2034778696
                    elasticsearch.k8s.elastic.co/http-scheme=https
                    elasticsearch.k8s.elastic.co/node-data=true
                    elasticsearch.k8s.elastic.co/node-ingest=true
                    elasticsearch.k8s.elastic.co/node-master=true
                    elasticsearch.k8s.elastic.co/node-ml=true
                    elasticsearch.k8s.elastic.co/statefulset-name=quickstart-es-default
                    elasticsearch.k8s.elastic.co/version=7.6.1
                    statefulset.kubernetes.io/pod-name=quickstart-es-default-0
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      StatefulSet/quickstart-es-default
Init Containers:
  elastic-internal-init-filesystem:
    Image:      docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    Port:       <none>
    Host Port:  <none>
    Command:
      bash
      -c
      /mnt/elastic-internal/scripts/prepare-fs.sh
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_IP:     (v1:status.podIP)
      POD_NAME:  quickstart-es-default-0 (v1:metadata.name)
      POD_IP:     (v1:status.podIP)
      POD_NAME:  quickstart-es-default-0 (v1:metadata.name)
    Mounts:
      /mnt/elastic-internal/downward-api from downward-api (ro)
      /mnt/elastic-internal/elasticsearch-bin-local from elastic-internal-elasticsearch-bin-local (rw)
      /mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
      /mnt/elastic-internal/elasticsearch-config-local from elastic-internal-elasticsearch-config-local (rw)
      /mnt/elastic-internal/elasticsearch-plugins-local from elastic-internal-elasticsearch-plugins-local (rw)
      /mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
      /mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
      /mnt/elastic-internal/transport-certificates from elastic-internal-transport-certificates (ro)
      /mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
      /mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
      /usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
      /usr/share/elasticsearch/data from elasticsearch-data (rw)
      /usr/share/elasticsearch/logs from elasticsearch-logs (rw)
Containers:
  elasticsearch:
    Image:       docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    Ports:       9200/TCP, 9300/TCP
    Host Ports:  0/TCP, 0/TCP
    Limits:
      memory:  2Gi
    Requests:
      memory:   2Gi
    Readiness:  exec [bash -c /mnt/elastic-internal/scripts/readiness-probe-script.sh] delay=10s timeout=5s period=5s #success=1 #failure=3
    Environment:
      HEADLESS_SERVICE_NAME:     quickstart-es-default
      NSS_SDB_USE_CACHE:         no
      POD_IP:                     (v1:status.podIP)
      POD_NAME:                  quickstart-es-default-0 (v1:metadata.name)
      PROBE_PASSWORD_PATH:       /mnt/elastic-internal/probe-user/elastic-internal-probe
      PROBE_USERNAME:            elastic-internal-probe
      READINESS_PROBE_PROTOCOL:  https
    Mounts:
      /mnt/elastic-internal/downward-api from downward-api (ro)
      /mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
      /mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
      /mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
      /mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
      /mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
      /usr/share/elasticsearch/bin from elastic-internal-elasticsearch-bin-local (rw)
      /usr/share/elasticsearch/config from elastic-internal-elasticsearch-config-local (rw)
      /usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
      /usr/share/elasticsearch/config/transport-certs from elastic-internal-transport-certificates (ro)
      /usr/share/elasticsearch/data from elasticsearch-data (rw)
      /usr/share/elasticsearch/logs from elasticsearch-logs (rw)
      /usr/share/elasticsearch/plugins from elastic-internal-elasticsearch-plugins-local (rw)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  elasticsearch-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  elasticsearch-data-quickstart-es-default-0
    ReadOnly:   false
  downward-api:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
  elastic-internal-elasticsearch-bin-local:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  elastic-internal-elasticsearch-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  quickstart-es-default-es-config
    Optional:    false
  elastic-internal-elasticsearch-config-local:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  elastic-internal-elasticsearch-plugins-local:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  elastic-internal-http-certificates:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  quickstart-es-http-certs-internal
    Optional:    false
  elastic-internal-probe-user:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  quickstart-es-internal-users
    Optional:    false
  elastic-internal-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      quickstart-es-scripts
    Optional:  false
  elastic-internal-transport-certificates:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  quickstart-es-transport-certificates
    Optional:    false
  elastic-internal-unicast-hosts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      quickstart-es-unicast-hosts
    Optional:  false
  elastic-internal-xpack-file-realm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  quickstart-es-xpack-file-realm
    Optional:    false
  elasticsearch-logs:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  55s (x15 over 11m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 3 times)

Is there any way to create this claim manually?

If you do a kubectl describe on the pvc you should be able to see more detail on why it cannot be bound. It may be something like there is no dynamic provisioner for the storage class you specified. You may need to specify a different storage class or manually create a PV that satisfies the claim.
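
For example, here is a minimal sketch of a manually created PV that could satisfy a claim like this. Note that the quickstart PVC has no storage class set, so the PV must also leave storageClassName unset to bind; the name, capacity, and hostPath path below are illustrative assumptions, and hostPath is only suitable for single-node testing:

```yaml
# Illustrative sketch only: a hostPath PV is for testing, not production.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: quickstart-es-data-pv           # hypothetical name
spec:
  capacity:
    storage: 1Gi                        # must cover the claim's request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data/quickstart-es       # assumed path on the node
```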

Thank you very much for the fast answer. It gave me some insight into where to look, and my situation now looks very similar to https://github.com/elastic/helm-charts/issues/332

I have checked 'kubectl describe pvc'. The PVC exists, but it is stuck in Pending:

Name:          elasticsearch-data-quickstart-es-default-0
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        common.k8s.elastic.co/type=elasticsearch
               elasticsearch.k8s.elastic.co/cluster-name=quickstart
               elasticsearch.k8s.elastic.co/statefulset-name=quickstart-es-default
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Events:
  Type       Reason         Age                     From                         Message
  ----       ------         ----                    ----                         -------
  Normal     FailedBinding  4m51s (x3643 over 15h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
Mounted By:  quickstart-es-default-0

If I issue 'kubectl describe storageclass' I get tons of them (only names are listed below):

    efs-performance
    efs-standard
    nfs-rw
    obs-standard
    obs-standard-ia
    sas
    sata
    ssd

Can you tell me more about how the ES Kubernetes Operator works? Which storage class does it need initially?

As suggested on GitHub, there should be a "default" storage class. But when I look into the Google Cloud Platform cluster where ECK works perfectly, there is no "default" storage class either.

You can specify the storage class you want to use in the volumeClaimTemplates section of the Elasticsearch spec.
If you don't specify one, the default one is used.

But when I look into Google Cloud Platform where ECK works perfectly, there is no "default" storage class either

Unless you customized it, there is a default on GKE clusters.
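
To verify, you can list the classes and, if needed, mark one as the cluster default yourself. The class name "sata" below is just an example taken from your list, not a recommendation:

```shell
# The default class is flagged "(default)" in the output
kubectl get storageclass

# Mark an existing class (e.g. "sata") as the cluster default
kubectl patch storageclass sata \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```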

Thank you very much Sebastien. To be absolutely sure I understand it correctly: the volumeClaimTemplates section goes in the Elasticsearch cluster specification?

So inside this simple cluster specification of type Elasticsearch I can additionally specify the storage class as you suggested:

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.6.1
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
EOF

Yes! You can do something like this:

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.6.1
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 500Gi
        storageClassName: ssd
EOF

You should figure out the best storage class for your use case. Network-attached storage generally provides simpler operations (volumes can be moved from one host to another), but poorer performance compared to local storage.


Is it possible to use an existing PV/PVC instead of storageClassName? Here is what I already have in my cluster:

kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS   REASON   AGE
pvc-dc8b5579-679e-11ea-87a6-fa163efd60f2   10Gi       RWX            Delete           Bound    default/cce-evs-k7unitqv-9pp2   sata                    23h

I'm asking because if I modify the storage class as suggested above, I get

Conditions:v1.PersistentVolumeClaimCondition(nil)}}}: Volume claim templates cannot be modified

I guess I can get rid of it by redeploying the whole ES stack from scratch, but maybe it is possible to use the existing claim?
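
For reference, one way to point an operator-created claim at an existing volume is to pre-bind a PV to that specific PVC via claimRef. This is a sketch under assumptions (the PV name, capacity, and backing volume source are illustrative; I have not verified this on OTC):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-prebound                # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: sata                # must match the PVC's storage class
  claimRef:                             # pre-binds this PV to the operator-created PVC
    namespace: default
    name: elasticsearch-data-quickstart-es-default-0
  hostPath:                             # illustrative backing store; use your real EVS/CSI source instead
    path: /mnt/data/es
```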