Hello!
I am trying to deploy Elasticsearch using ECK, but I have a problem with the PersistentVolume.
My Elasticsearch manifest:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: opencti-elastic
spec:
  version: 8.11.4
  volumeClaimDeletePolicy: DeleteOnScaledownOnly
  nodeSets:
  - name: master
    count: 3
    config:
      node.roles: ["master"]
      node.store.allow_mmap: false
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: READINESS_PROBE_TIMEOUT
            value: "10"
          - name: ES_JAVA_OPTS
            value: -Xms2g -Xmx2g
          resources:
            requests:
              memory: 4Gi
              cpu: 1
            limits:
              memory: 4Gi
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: "local-storage"
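As far as I understand, because of count: 3 this volumeClaimTemplate makes ECK create one PersistentVolumeClaim per master Pod, each roughly equivalent to the following (the claim name is the one ECK actually generated for the first Pod, as shown in the kubectl output further down):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data-opencti-elastic-es-master-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage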
My PersistentVolume manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data
  labels:
    type: local
spec:
  storageClassName: local-storage
  volumeMode: Filesystem
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
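From what I read, a PersistentVolume can only be bound by a single PVC, so it seems I would need one PV per master Pod, something like this (the name elasticsearch-data-1 and the path /mnt/data-1 are just placeholders I made up, I have not applied this):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data-1   # placeholder: one PV per claim
  labels:
    type: local
spec:
  storageClassName: local-storage
  volumeMode: Filesystem
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data-1"        # placeholder path on the node

But that would mean a separate hostPath directory for every Pod instead of sharing /mnt/data.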
I created the PV elasticsearch-data successfully, but only one PVC can bind to it:
root@k8s-master-test:~# kubectl get pv
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
elasticsearch-data   5Gi        RWO            Retain           Available           local-storage            4s
root@k8s-master-test:~# kubectl get pvc
NAME                                             STATUS    VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS    AGE
elasticsearch-data-opencti-elastic-es-master-0   Bound     elasticsearch-data   5Gi        RWO            local-storage   11s
elasticsearch-data-opencti-elastic-es-master-1   Pending                                                  local-storage   11s
elasticsearch-data-opencti-elastic-es-master-2   Pending                                                  local-storage   11s
root@k8s-master-test:~# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
opencti-elastic-es-master-0   1/1     Running   0          39s
opencti-elastic-es-master-1   0/1     Pending   0          39s
opencti-elastic-es-master-2   0/1     Pending   0          39s
# kubectl describe pods opencti-elastic-es-master-1
...
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  86s (x3 over 90s)  default-scheduler  0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 4 node(s) didn't find available persistent volumes to bind. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling..
How can I use one PV for several pods?