Hello,
I'm trying to deploy ECK into Open Telekom Cloud. I previously tested it on GKE (Google Kubernetes Engine) and it works like a charm.
Unfortunately, I'm having a problem in Open Telekom Cloud related to PersistentVolumeClaims. I'm following https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html, and it seems the PVC for the quickstart-es-default-0 pod never gets bound:
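For reference, this is (roughly) the quickstart manifest I applied; the cluster name, nodeSet name and version match the pod labels below:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.6.1
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false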
kubectl describe pod quickstart-es-default-0
Name: quickstart-es-default-0
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: common.k8s.elastic.co/type=elasticsearch
controller-revision-hash=quickstart-es-default-5d7cbf6c5b
elasticsearch.k8s.elastic.co/cluster-name=quickstart
elasticsearch.k8s.elastic.co/config-hash=2034778696
elasticsearch.k8s.elastic.co/http-scheme=https
elasticsearch.k8s.elastic.co/node-data=true
elasticsearch.k8s.elastic.co/node-ingest=true
elasticsearch.k8s.elastic.co/node-master=true
elasticsearch.k8s.elastic.co/node-ml=true
elasticsearch.k8s.elastic.co/statefulset-name=quickstart-es-default
elasticsearch.k8s.elastic.co/version=7.6.1
statefulset.kubernetes.io/pod-name=quickstart-es-default-0
Annotations: <none>
Status: Pending
IP:
Controlled By: StatefulSet/quickstart-es-default
Init Containers:
elastic-internal-init-filesystem:
Image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
Port: <none>
Host Port: <none>
Command:
bash
-c
/mnt/elastic-internal/scripts/prepare-fs.sh
Limits:
cpu: 100m
memory: 50Mi
Requests:
cpu: 100m
memory: 50Mi
Environment:
POD_IP: (v1:status.podIP)
POD_NAME: quickstart-es-default-0 (v1:metadata.name)
Mounts:
/mnt/elastic-internal/downward-api from downward-api (ro)
/mnt/elastic-internal/elasticsearch-bin-local from elastic-internal-elasticsearch-bin-local (rw)
/mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
/mnt/elastic-internal/elasticsearch-config-local from elastic-internal-elasticsearch-config-local (rw)
/mnt/elastic-internal/elasticsearch-plugins-local from elastic-internal-elasticsearch-plugins-local (rw)
/mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
/mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
/mnt/elastic-internal/transport-certificates from elastic-internal-transport-certificates (ro)
/mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
/mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
/usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
/usr/share/elasticsearch/data from elasticsearch-data (rw)
/usr/share/elasticsearch/logs from elasticsearch-logs (rw)
Containers:
elasticsearch:
Image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
Ports: 9200/TCP, 9300/TCP
Host Ports: 0/TCP, 0/TCP
Limits:
memory: 2Gi
Requests:
memory: 2Gi
Readiness: exec [bash -c /mnt/elastic-internal/scripts/readiness-probe-script.sh] delay=10s timeout=5s period=5s #success=1 #failure=3
Environment:
HEADLESS_SERVICE_NAME: quickstart-es-default
NSS_SDB_USE_CACHE: no
POD_IP: (v1:status.podIP)
POD_NAME: quickstart-es-default-0 (v1:metadata.name)
PROBE_PASSWORD_PATH: /mnt/elastic-internal/probe-user/elastic-internal-probe
PROBE_USERNAME: elastic-internal-probe
READINESS_PROBE_PROTOCOL: https
Mounts:
/mnt/elastic-internal/downward-api from downward-api (ro)
/mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
/mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
/mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
/mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
/mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
/usr/share/elasticsearch/bin from elastic-internal-elasticsearch-bin-local (rw)
/usr/share/elasticsearch/config from elastic-internal-elasticsearch-config-local (rw)
/usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
/usr/share/elasticsearch/config/transport-certs from elastic-internal-transport-certificates (ro)
/usr/share/elasticsearch/data from elasticsearch-data (rw)
/usr/share/elasticsearch/logs from elasticsearch-logs (rw)
/usr/share/elasticsearch/plugins from elastic-internal-elasticsearch-plugins-local (rw)
Conditions:
Type Status
PodScheduled False
Volumes:
elasticsearch-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: elasticsearch-data-quickstart-es-default-0
ReadOnly: false
downward-api:
Type: DownwardAPI (a volume populated by information about the pod)
Items:
metadata.labels -> labels
elastic-internal-elasticsearch-bin-local:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
elastic-internal-elasticsearch-config:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-default-es-config
Optional: false
elastic-internal-elasticsearch-config-local:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
elastic-internal-elasticsearch-plugins-local:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
elastic-internal-http-certificates:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-http-certs-internal
Optional: false
elastic-internal-probe-user:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-internal-users
Optional: false
elastic-internal-scripts:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: quickstart-es-scripts
Optional: false
elastic-internal-transport-certificates:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-transport-certificates
Optional: false
elastic-internal-unicast-hosts:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: quickstart-es-unicast-hosts
Optional: false
elastic-internal-xpack-file-realm:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-xpack-file-realm
Optional: false
elasticsearch-logs:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 55s (x15 over 11m) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
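To gather more details I plan to inspect the claim itself and the storage classes available in the cluster; I can post the output of these if it helps:

kubectl get pvc -n default
kubectl get storageclass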
Is there any way to create this claim manually, or is there a better way to point ECK at the right storage class? Two ideas I have are sketched below.
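First idea: create the claim the StatefulSet expects by hand. The claim name and namespace come from the describe output above, but the storage class (csi-disk) and the 1Gi size are my assumptions, since I don't know what Open Telekom Cloud provisions by default:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # Name taken from the ClaimName in the describe output above
  name: elasticsearch-data-quickstart-es-default-0
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi               # guess: ECK's default volume size
  storageClassName: csi-disk     # assumption: OTC default storage class, please correct me

Second idea: declare a volumeClaimTemplates section on the Elasticsearch resource, so that ECK creates the claims with an explicit storage class. Something like this (again, csi-disk is just my guess for OTC):

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.6.1
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data   # must match the data volume name used by ECK
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: csi-disk # assumption: OTC storage class name

Is either of these the right direction?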